Lately, I have read some papers about the hidden sector of particle physics, which couples to the Standard Model through the so-called Higgs portal. Let the Lagrangian for this be composed of two simple scalar fields like this:
$L=\partial_\mu \phi_{SM} \partial^\mu \phi_{SM} +\partial_\mu \phi_H \partial^\mu \phi_H -V(\phi_{SM},\phi_H)$
where $\phi_{SM}$ relates to the Standard Model and $\phi_H$ relates to the hidden sector.
Assuming that the potential is:
$V(\phi_{SM},\phi_H)=-1/2 \mu^2 {\phi_H} ^2 + 1/4 \lambda {\phi_H}^4 - 1/2 \mu^2 {\phi_{SM}} ^2 + 1/4 \lambda {\phi_{SM}}^4+ 1/4 \lambda_{mix} \phi_H^2 \phi_{SM}^2$
And since both of the fields gain a nonzero VEV like this:
$\phi_H = v_H + h_H(x)/\sqrt{2}$
$\phi_{SM} = v_{SM} + h_{SM}(x)/\sqrt{2}$
How would the spontaneous symmetry breaking mechanism then work? I am interested in how exactly the Higgs mechanism would work mathematically when two different minima "v" are involved. This post imported from StackExchange Physics at 2014-06-25 21:00 (UCT), posted by SE-user user33941 |
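As a rough sketch of the first step (an editorial illustration, assuming the potential written above and nonzero $v_H$, $v_{SM}$), the stationarity conditions $\partial V/\partial\phi_H = \partial V/\partial\phi_{SM} = 0$ at the minimum read
$$\lambda v_H^2 + \tfrac{1}{2}\lambda_{mix} v_{SM}^2 = \mu^2, \qquad \lambda v_{SM}^2 + \tfrac{1}{2}\lambda_{mix} v_H^2 = \mu^2,$$
and expanding $V$ to second order in the fluctuations gives a mass matrix proportional to
$$M^2 \propto \begin{pmatrix} 2\lambda v_H^2 & \lambda_{mix} v_H v_{SM} \\ \lambda_{mix} v_H v_{SM} & 2\lambda v_{SM}^2 \end{pmatrix},$$
whose off-diagonal $\lambda_{mix}$ entry is what mixes the two scalars into the physical mass eigenstates (the overall normalization depends on how the $\sqrt{2}$ in the fluctuation is conventionally placed).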
The model you describe is known as the Blum-Shub-Smale (BSS) model (also called the Real RAM model) and is indeed used to define complexity classes.
Some interesting objects in this domain are the classes $P_R$ and $NP_R$, and of course the question of whether $P_R = NP_R$. By $P_R$ we mean that the problem is polynomially decidable; $NP_R$ means the problem is polynomially verifiable. There are hardness/completeness questions about the class $NP_R$. An example of an $NP_R$-complete problem is $QPS$, the Quadratic Polynomial System problem: the input is a set of real polynomials $p_1, \ldots, p_m \in \mathbb{R}[x_1, \ldots, x_n]$ of degree at most 2, each involving at most 3 variables, and the question is whether there is a common real zero $a \in \mathbb{R}^n$ such that $p_1(a) = p_2(a) = \cdots = p_m(a) = 0$. This is an $NP_R$-complete problem.
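As a loose illustration of the "polynomially verifiable" part (a sketch only, not the exact BSS formalism, which computes with exact reals; this hypothetical code uses floating point and a tolerance), a candidate point is simply plugged into every polynomial and checked:

```python
# Hypothetical QPS instance: each polynomial has degree <= 2 and touches
# at most 3 of the variables; the verifier checks a common (approximate) zero.
def verify_qps(polys, a, tol=1e-9):
    """Return True if the candidate point a is an approximate common real zero."""
    return all(abs(p(a)) <= tol for p in polys)

polys = [
    lambda a: a[0] ** 2 + a[1] - 2,   # x0^2 + x1 - 2
    lambda a: a[1] * a[2] - 1,        # x1*x2 - 1
    lambda a: a[0] - a[2],            # x0 - x2
]
print(verify_qps(polys, [1.0, 1.0, 1.0]))   # True: (1, 1, 1) is a common zero
```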
But more interestingly, there has been some work on the relationship between $PCP$ (Probabilistically Checkable Proofs) over the reals, i.e. the class $PCP_R$, and how it relates to the algebraic computation models. The BSS model generalizes $NP$ to computations over the reals. This is standard in the literature, and what we know today is that $NP_R$ has "transparent long proofs" and "transparent short proofs". By "transparent long proofs" the following is implied: $NP_R$ is contained in $PCP_R(poly, O(1))$. There is also an extension which says that the "almost (approximately) short" version is true too. Can we stabilize the proof and detect faults by inspecting considerably fewer (real) components than $n$? This leads to questions about the existence of zeros of (systems of) univariate polynomials given by straight-line programs. Also, by "transparent long proofs" we mean
"transparent" - Only, $O(1)$ to be read,
long - superpolynomial number of real components.
The proof is tied to $3SAT$, and one way to look at real-valued problems is how they might relate to Subset Sum; even approximation algorithms for real-valued problems would be interesting. As for optimization, Linear Programming is known to be in the class $FP$, but it would be interesting to see how approximability might impact completeness/hardness for $NP_R$ problems. Another open question is whether $NP_R = co\text{-}NP_R$.
While thinking of the class $NP_R$, there are also counting classes defined to allow reasoning about polynomial arithmetic. $\#P$ is the class of functions $f: \{0,1\}^\infty \rightarrow \mathbb{N}$ for which there exist a polynomial-time Turing machine $M$ and a polynomial $p$ such that for all $n \in \mathbb{N}$ and $x \in \{0,1\}^{n}$, $f(x)$ counts the number of strings $y \in \{0,1\}^{p(n)}$ for which $M$ accepts $(x,y)$. For the reals this idea is extended using additive BSS machines, i.e. BSS machines whose computation nodes perform only additions and subtractions (no multiplications, no divisions). With additive BSS machines, the analogue of $\#P$ becomes the class in which the count is over the vectors that the additive BSS machine accepts. This counting class, $\#P_{add}$, is useful in the study of Betti numbers and also of the Euler characteristic. |
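For intuition about the classical $\#P$ definition above (not the additive-BSS version), here is a toy counting function, with Subset Sum as a hypothetical verifier: $f(x)$ counts the witnesses $y$ that the verifier accepts.

```python
from itertools import product

# Toy #P-style count: for input x = (numbers, target), count the 0/1 witness
# strings y the polynomial-time verifier accepts, i.e. the subsets of the
# numbers (encoded by y) that sum to the target.
def count_witnesses(numbers, target):
    return sum(
        1
        for y in product([0, 1], repeat=len(numbers))
        if sum(n for n, bit in zip(numbers, y) if bit) == target
    )

print(count_witnesses([2, 3, 5, 7, 10], 12))   # counts e.g. {2,3,7}, {5,7}, {2,10}
```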
Okay, now I've rather carefully discussed one example of \(\mathcal{V}\)-enriched profunctors, and rather sloppily discussed another. Now it's time to build the general framework that can handle both these examples.
We can define \(\mathcal{V}\)-enriched categories whenever \(\mathcal{V}\) is a monoidal preorder: we did that way back in Lecture 29. We can also define \(\mathcal{V}\)-enriched functors whenever \(\mathcal{V}\) is a monoidal preorder: we did that in Lecture 31. But to define \(\mathcal{V}\)-enriched profunctors, we need \(\mathcal{V}\) to be a bit better. We can see why by comparing our examples.
Our first example involved \(\mathcal{V} = \textbf{Bool}\). A
feasibility relation
$$ \Phi : X \nrightarrow Y $$ between preorders is a monotone function
$$ \Phi: X^{\text{op}} \times Y\to \mathbf{Bool} . $$ We shall see that a feasibility relation is the same as a \( \textbf{Bool}\)-enriched profunctor.
Our second example involved \(\mathcal{V} = \textbf{Cost}\). I said that a \( \textbf{Cost}\)-enriched profunctor
$$ \Phi : X \nrightarrow Y $$ between \(\mathbf{Cost}\)-enriched categories is a \( \textbf{Cost}\)-enriched functor
$$ \Phi: X^{\text{op}} \times Y \to \mathbf{Cost} $$ obeying some conditions. But I let you struggle to guess those conditions... without enough clues to make it easy!
To fit both our examples in a general framework, we start by considering an arbitrary monoidal preorder \(\mathcal{V}\). \(\mathcal{V}\)-enriched profunctors will go between \(\mathcal{V}\)-enriched categories. So, let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(\mathcal{V}\)-enriched categories. We want to make this definition:
Tentative Definition. A \(\mathcal{V}\)-enriched profunctor
$$ \Phi : \mathcal{X} \nrightarrow \mathcal{Y} $$ is a \(\mathcal{V}\)-enriched functor
$$ \Phi: \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} .$$ Notice that this handles our first example very well. But some questions appear in our second example - and indeed in general. For our tentative definition to make sense, we need three things:
We need \(\mathcal{V}\) to itself be a \(\mathcal{V}\)-enriched category.
We need any two \(\mathcal{V}\)-enriched categories to have a 'product', which is again a \(\mathcal{V}\)-enriched category.
We need any \(\mathcal{V}\)-enriched category to have an 'opposite', which is again a \(\mathcal{V}\)-enriched category.
Items 2 and 3 work fine whenever \(\mathcal{V}\) is a commutative monoidal poset. We'll see why in Lecture 62.
Item 1 is trickier, and indeed it sounds rather scary. \(\mathcal{V}\) began life as a humble monoidal preorder. Now we're wanting it to be
enriched in itself! Isn't that circular somehow?
Yes! But not in a bad way. Category theory often eats its own tail, like the mythical ouroboros, and this is an example.
To get \(\mathcal{V}\) to become a \(\mathcal{V}\)-enriched category, we'll demand that it be 'closed'. For starters, let's assume it's a monoidal poset, just to avoid some technicalities.
Definition. A monoidal poset \(\mathcal{V}\) is closed if for all elements \(x,y \in \mathcal{V}\) there is an element \(x \multimap y \in \mathcal{V}\) such that
$$ x \otimes a \le y \text{ if and only if } a \le x \multimap y $$ for all \(a \in \mathcal{V}\).
This will let us make \(\mathcal{V}\) into a \(\mathcal{V}\)-enriched category by setting \(\mathcal{V}(x,y) = x \multimap y \). But first let's try to understand this concept a bit!
We can check that our friend \(\mathbf{Bool}\) is closed. Remember, we are making it into a monoidal poset using 'and' as its binary operation: its full name is \( (\lbrace \text{true},\text{false}\rbrace, \wedge, \text{true})\). Then we can take \( x \multimap y \) to be 'implication'. More precisely, we say \( x \multimap y = \text{true}\) iff \(x\) implies \(y\). Even more precisely, we define:
$$ \text{true} \multimap \text{true} = \text{true} $$$$ \text{true} \multimap \text{false} = \text{false} $$$$ \text{false} \multimap \text{true} = \text{true} $$$$ \text{false} \multimap \text{false} = \text{true} . $$
Puzzle 188. Show that with this definition of \(\multimap\) for \(\mathbf{Bool}\) we have
$$ a \wedge x \le y \text{ if and only if } a \le x \multimap y $$ for all \(a,x,y \in \mathbf{Bool}\).
We can also check that our friend \(\mathbf{Cost}\) is closed! Remember, we are making it into a monoidal poset using \(+\) as its binary operation: its full name is \( ([0,\infty], \ge, +, 0)\). Then we can define \( x \multimap y \) to be 'subtraction'. More precisely, we define \(x \multimap y\) to be \(y - x\) if \(y \ge x\), and \(0\) otherwise.
Puzzle 189. Show that with this definition of \(\multimap\) for \(\mathbf{Cost}\) we have
$$ a + x \le y \text{ if and only if } a \le x \multimap y . $$But beware. We have defined the ordering on \(\mathbf{Cost}\) to be the
opposite of the usual ordering of numbers in \([0,\infty]\). So, \(\le\) above means the opposite of what you might expect!
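Here is a small brute-force sanity check of both examples (hypothetical code, a finite check on sampled values rather than a proof, so it does not spoil Puzzles 188 and 189): it encodes \(\mathbf{Bool}\) with Python booleans (false \(\le\) true) and \(\mathbf{Cost}\) with floats under the reversed order, and tests the defining 'if and only if'.

```python
from itertools import product

# Bool = ({false, true}, ∧, true): x ⊸ y is implication; the order is False <= True.
def imp(x, y):
    return (not x) or y

assert all(((a and x) <= y) == (a <= imp(x, y))
           for a, x, y in product([False, True], repeat=3))

# Cost = ([0, ∞], ≥, +, 0): x ⊸ y is truncated subtraction, and the Cost order
# "a ≤ b" is the REVERSED numerical order, i.e. a >= b as numbers.
def hom(x, y):
    return y - x if y > x else 0.0

def leq(a, b):
    return a >= b

vals = [0.0, 0.5, 1.0, 2.0, 3.5, float('inf')]
assert all(leq(a + x, y) == leq(a, hom(x, y))
           for a, x, y in product(vals, repeat=3))
print("adjunction checks pass on the sampled values")
```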
Next, two more tricky puzzles. Next time I'll show you in general how a closed monoidal poset \(\mathcal{V}\) becomes a \(\mathcal{V}\)-enriched category. But to appreciate this, it may help to try some examples first:
Puzzle 190. What does it mean, exactly, to make \(\mathbf{Bool}\) into a \(\mathbf{Bool}\)-enriched category? Can you see how to do this by defining
$$ \mathbf{Bool}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Bool}\), where \(\multimap\) is defined to be 'implication' as above?
Puzzle 191. What does it mean, exactly, to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category? Can you see how to do this by defining
$$ \mathbf{Cost}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Cost}\), where \(\multimap\) is defined to be 'subtraction' as above?
Note: for Puzzle 190 you might be tempted to say "a \(\mathbf{Bool}\)-enriched category is just a preorder, so I'll use that fact here". However, you may learn more if you go back to the general definition of enriched category and use that! The reason is that we're trying to understand some general things by thinking about two examples.
Puzzle 192. The definition of 'closed' above is an example of a very important concept we keep seeing in this course. What is it? Restate the definition of closed monoidal poset in a more elegant, but equivalent, way using this concept. |
Abstract:
Using transversality and a dimension reduction argument, a result of A. Bezdek and W. Kuperberg is applied to polycylinders, showing that the optimal packing density of $\mathbb{D}^2\times \mathbb{R}^n$ equals $\pi/\sqrt{12}$ for all $n \ge 0$.
Comments and Corrigenda:
This paper was split before publication. In the published version, the second sentence “The closed unit interval is denoted by $\mathbb{I}$.” is extraneous. |
Problem A. 689. (February 2017) Let \(\displaystyle f_1,f_2,\ldots\) be an infinite sequence of continuous \(\displaystyle \mathbb{R}\to\mathbb{R}\) functions such that for arbitrary positive integer \(\displaystyle k\) and arbitrary real numbers \(\displaystyle r>0\) and \(\displaystyle c\) there exists a number \(\displaystyle x\in(-r,r)\) with \(\displaystyle f_k(x)\ne cx\). Show that there exists a sequence \(\displaystyle a_1,a_2,\ldots\) of real numbers such that \(\displaystyle \sum_{n=1}^\infty a_n\) is convergent, but \(\displaystyle \sum_{n=1}^\infty f_k(a_n)\) is divergent for every positive integer \(\displaystyle k\).
(5 points)
Deadline expired on March 10, 2017. Statistics:
8 students sent a solution. 5 points: Bukva Balázs, Gáspár Attila, Kovács 246 Benedek, Lajkó Kálmán, Williams Kada. 4 points: Matolcsi Dávid. 3 points: 1 student. 0 point: 1 student. |
Welcome to The Riddler. Every week, I offer up a problem related to the things we hold dear around here: math, logic and probability. These problems, puzzles and riddles come from lots of top-notch puzzle folks around the world — including you! You’ll find this week’s puzzle below.
Mull it over on your commute, dissect it on your lunch break and argue about it with your friends and lovers. When you’re ready,
submit your answer using the link below. I’ll reveal the solution next week, and a correct submission (chosen at random) will earn a shoutout in this column. Important small print: To be eligible, I need to receive your correct answer before 11:59 p.m. EDT on Sunday. Have a great weekend!
Before we get to the new puzzle, let’s return to last week’s. Congratulations to 👏
Rasmus Ibsen-Jensen 👏 of Vienna, Austria, our big winner. You can find a solution to the previous Riddler at the bottom of this post.
Now here’s this week’s Riddler, a Pokémon Go puzzle that comes to us from
Po-Shen Loh, a math professor at Carnegie Mellon University, the coach of the U.S. International Math Olympiad team and the founder of expii.com.
Your neighborhood park is full of Pokéstops — places where you can restock on Pokéballs to, yes, catch more Pokémon! You are at one of them right now and want to visit them all. The Pokéstops are located at points whose (x, y) coordinates are integers on a fixed coordinate system in the park.
For any given pair of Pokéstops in your park, it is possible to walk from one to the other along a path that always goes from one Pokéstop to another Pokéstop adjacent to it. (Two Pokéstops are considered adjacent if they are at points that are exactly 1 unit apart. For example, Pokéstops at (3, 4) and (4, 4) would be considered adjacent.)
You’re an ambitious and efficient Pokémon trainer, who is also a bit of a homebody: You wish to visit each Pokéstop and return to where you started, while traveling the shortest possible total distance. In this open park, it is possible to walk in a straight line from any point to any other point — you’re not confined to the coordinate system’s grid. It turns out that this is a really hard problem, so you seek an approximate solution.
If there are N Pokéstops in total, find the upper and lower bounds on the total length of the optimal walk. (Your objective is to find bounds whose ratio is as close to 1 as possible.)
Advanced extra credit: For solvers who prefer a numerical question with this theme, suppose that the Pokéstops are located at every point with coordinates (x, y), where x and y are relatively prime positive integers less than or equal to 1,000. Find upper and lower bounds for the length of the optimal walk, again seeking bounds whose ratio is as close to 1 as possible.
Submit your answer
Need a hint? You can try asking me nicely. Want to submit a new puzzle or problem? Email me. I’m especially on the hunt for Riddler Jr. problems — puzzles that can stoke the curiosity and critical thinking of Riddler Nation’s younger compatriots.
And here’s the solution to last week’s Riddler, concerning a hungry, but persnickety, grizzly bear. The bear wants to maximize its intake of salmon, but is only willing to eat fish that are at least as big as all the fish it’s eaten already. If the bear will see two or three salmon during its fishing expedition, which weigh something uniformly random between 0 and 1 kilogram, it should
always eat every fish it can.
To see why, let’s start with the two-fish case. If the bear eats every fish it can — the “greedy” strategy — its expected fish intake is the expected weight of the first fish plus the expected weight of the second fish
given that it’s willing to eat it. Say the first fish has a weight \(x_1\) and the second fish a weight \(x_2\). The expectation under the greedy strategy is
$$\int_0^1 \left(x_1 +\int_{x_1}^1 x_2 dx_2\right)dx_1=\frac{5}{6}\approx 0.833$$
But if the bear forgoes the first fish, it'll eat only the second fish, which is only half a kilogram on average. Therefore, always eat the first fish!
A similar argument holds for the three-fish case. We now know that if the bear forgoes the first fish, we’re back to the two-fish case, where it can expect 5/6 kilograms of salmon total. In fact, a nice pattern emerges. If the bear eats every fish it can on an expedition N fishes long, its expected intake is the first N terms of the sum 1/2 + 1/3 + 1/4 + 1/5 + 1/6 … So, for a three-fish expedition, eating the first fish yields 1/2 + 1/3 + 1/4 = 13/12 kilograms on average. Again, always eat the first fish!
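A quick Monte Carlo sketch (hypothetical code, not part of the column) reproduces this pattern: the greedy bear's average intake over N fish comes out near 1/2 + 1/3 + … + 1/(N+1).

```python
import random

def greedy_intake(n, trials=200_000):
    """Average total weight eaten by a bear that eats every fish
    at least as heavy as everything it has already eaten."""
    total = 0.0
    for _ in range(trials):
        biggest_eaten = 0.0
        for _ in range(n):
            w = random.random()        # fish weight ~ Uniform(0, 1)
            if w >= biggest_eaten:     # greedy rule
                total += w
                biggest_eaten = w
    return total / trials

# compare simulation with 1/2 + 1/3 + ... + 1/(n+1)
for n in (2, 3, 4):
    print(n, round(greedy_intake(n), 3), round(sum(1 / k for k in range(2, n + 2)), 3))
```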
But things change if the fishing expedition gets any longer.
Laurent Lessard compared optimal bear behavior to greedy bear behavior. For three fish or fewer, these strategies are identical. But for more fish, the bear does well to let certain, heavier fish go early on in the expedition, in order to maximize consumption in the future.
The 🏆 Coolest Riddler Extension Award 🏆 this week goes to
Kris Mycroft. Kris transported the problem to the arctic, and looked at the problem faced by polar bears in a network of streams.
Looks like it pays to be the alpha bear at the head of the river.
And, lest you think The Riddler is a mere diversion, comfortably abstracted from the struggles of daily life, the Alaska Salmon Program illustrated the puzzle with real-world bear footage in this delightful Twitter thread:
Elsewhere in the puzzling world:
Some puzzles on Olympic strategy. [The New York Times] Puzzles about a summer birthday party. [The Wall Street Journal] A few Olympics problems, right on time. [Expii] And, appropriately, an R-I-O puzzle. [NPR] A puzzle book of a very different kind. [Wired]
Have a wonderful weekend! |
Define the quadratic variation of a semimartingale $(X_t)_{t \geq 0}$ by
$$[X,X]_t := \mathbb{P}-\lim_{n \to \infty} \sum_{j=1}^n (X_{t_j}-X_{t_{j-1}})^2$$
where $\Pi_n := \{0 = t_0<\ldots<t_n = t\}$ is a sequence of partitions such that $|\Pi_n| \to 0$. Moreover, we set $$[X,Y]_t := \frac{1}{4} ([X+Y]_t-[X-Y]_t).$$
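As a quick numerical illustration of this definition (a sketch using one fine partition rather than a genuine limit in probability), the sum of squared increments of a simulated Brownian motion over $[0,t]$ comes out close to $t$, consistent with $[B,B]_t = t$:

```python
import numpy as np

rng = np.random.default_rng(0)
t, n = 1.0, 200_000                              # horizon and partition size
dB = rng.normal(0.0, np.sqrt(t / n), size=n)     # Brownian increments over [0, t]
B = np.concatenate(([0.0], np.cumsum(dB)))       # B at the partition points

qv = np.sum(np.diff(B) ** 2)                     # sum of squared increments
print(qv)                                        # ≈ 1.0, i.e. [B, B]_t ≈ t
```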
Using this definition it is not difficult to show the following result.
Lemma 1 Let $(M_t)_{t \geq 0}$ be a continuous square-integrable martingale and $(A_t)_{t \geq 0}$ be a continuous process of bounded variation. Then,
$[M]_t$ is the unique increasing previsible process starting at $0$ such that $M_t^2-[M]_t$ is a martingale.
$[M,A]_t=[A,A]_t=0$
Now we are ready to calculate the quadratic covariation. By definition,
$$X_t\pm Y_t= \underbrace{\int_0^t (2+X_s \pm Y_s) \, dB_s}_{=:M_t} + \underbrace{c_{\pm}+ \int_0^t (6+3X_s \pm 3Y_s) \, ds}_{=:A_t}.$$
where $c_+=2$, $c_-=0$. Obviously, $(M_t)_{t \geq 0}$ is a (continuous) martingale and $(A_t)_{t \geq 0}$ of bounded variation. Consequently, we obtain by applying Lemma 1
$$[X \pm Y,X \pm Y]_t = [M,M]_t+2[M,A]_t+[A,A]_t = \int_0^t (2+X_s \pm Y_s)^2 \, ds.$$
Hence,
$$[X,Y]_t = \frac{1}{4} ([X+Y]_t-[X-Y]_t) = \int_0^t (2+X_s) \cdot Y_s \, ds \tag{1}$$
In order to compute $\mathbb{E}[X,Y]_t$, we have to find $\mathbb{E}Y_t$ and $\mathbb{E}(X_t \cdot Y_t)$. Since stochastic integrals with respect to a Brownian motion are martingales, we find
$$f(t) :=\mathbb{E}(Y_t)=1+\underbrace{\mathbb{E} \left( \int_0^t Y_s \, dB_s \right)}_{0} + 3 \int_0^t \mathbb{E}(Y_s) \, ds$$
i.e. $f$ satisfies the ODE
$$f'(t) = 3f(t) \qquad f(0)=1$$
Obviously, the unique solution is given by $$\mathbb{E}Y_t = f(t)= e^{3t} \tag{2}$$ Similarly, we obtain from Itô's formula that
$$g(t) := \mathbb{E}(X_t \cdot Y_t) =1+\mathbb{E} \left( \int_0^t(7X_s Y_s+8Y_s) \, ds \right) = 8 \int_0^t f(s) \, ds + 7 \int_0^t g(s) \, ds$$
i.e. $$g'(t) = 8f(t)+7g(t) = 8e^{3t}+7g(t) \tag{3}$$
This ODE can be solved explicitly; I leave it to you. Combining $(1)$, $(2)$ and $(3)$ allows us to compute $\mathbb{E}[X,Y]_t$. |
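For completeness, a small symbolic sketch (assuming SymPy is available, and using $X_0 = Y_0 = 1$ as implied by the initial values above) solves the two ODEs and then integrates $(1)$ in expectation, using $\mathbb{E}[X,Y]_t = \int_0^t (2f(s)+g(s))\,ds$ by Fubini.

```python
import sympy as sp

t, s = sp.symbols('t s', nonnegative=True)
f = sp.Function('f')   # f(t) = E(Y_t)
g = sp.Function('g')   # g(t) = E(X_t * Y_t)

# f' = 3 f, f(0) = 1   and   g' = 8 f + 7 g, g(0) = 1
f_sol = sp.dsolve(sp.Eq(f(t).diff(t), 3 * f(t)), f(t), ics={f(0): 1}).rhs
g_sol = sp.dsolve(sp.Eq(g(t).diff(t), 8 * f_sol + 7 * g(t)), g(t), ics={g(0): 1}).rhs

# E[X, Y]_t = E ∫_0^t (2 + X_s) Y_s ds = ∫_0^t (2 f(s) + g(s)) ds
expected_qcov = sp.integrate((2 * f_sol + g_sol).subs(t, s), (s, 0, t))

print(sp.simplify(f_sol))          # exp(3*t)
print(sp.simplify(g_sol))          # with this setup: 3*exp(7*t) - 2*exp(3*t)
print(sp.simplify(expected_qcov))  # with this setup: 3*(exp(7*t) - 1)/7
```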
I saw a similar post at Automatic equation numbering in LyX but it didn't answer my question, so hopefully someone can help me out.
I have one of my key bindings set to:
command-sequence math-mode; math-mutate align;
I used this to insert align environments all over my document. None of these are numbered. My goal is
to number every existing equation in the document, or to selectively number existing equations without having to cut and paste my equations into new environments.
My LaTeX preamble is listed below. Does anyone know how to turn on numbering?
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsmath,graphics, setspace}
\usepackage{braket}
\usepackage{color}
\usepackage{multicol}
\let\oldvec\vec
\let\oldsum\sum
\let\oldlim\lim
\let\oldint\int
\renewcommand{\vec}[1]{\oldvec{\mathbf{#1}}}
\renewcommand{\sum}{\displaystyle\oldsum}
\renewcommand{\lim}{\displaystyle\oldlim}
\renewcommand{\int}{\displaystyle\oldint}
\newif\ifsols
\renewcommand{\ifsols}{\ifsols\color{red}}
Edit:
Your suggestions work for my purposes. Thanks for the clear explanations! There is still one problem though. Ideally I would like the equation numbering to continue across multiple align environments; I don't want the numbering to reset at each align environment. Right now, when I click "View" and turn my LyX file into a PDF, the numbers do not reset between environments, which is what I want to happen. However, within the LyX program, before I make the PDF file, the numbers appear to reset at each new align environment. Is it possible to fix this? |
Mathematics > Probability
Title: Exceptional times for percolation under exclusion dynamics
(Submitted on 16 May 2016 (v1), last revised 28 Jun 2019 (this version, v5))
Abstract: We analyse in this paper a conservative analogue of the celebrated model of dynamical percolation introduced by Häggström, Peres and Steif in [HPS97]. It is simply defined as follows: start with an initial percolation configuration $\omega(t=0)$. Let this configuration evolve in time according to a simple exclusion process with symmetric kernel $K(x,y)$. We start with a general investigation (following [HPS97]) of this dynamical process $t \mapsto \omega_K(t)$ which we call $K$-exclusion dynamical percolation. We then proceed with a detailed analysis of the planar case at the critical point (both for the triangular grid and the square lattice $Z^2$) where we consider the power-law kernels $K^\alpha$ \[ K^{\alpha}(x,y) \propto \frac 1 {\|x-y\|_2^{2+\alpha}} \, . \] We prove that if $\alpha > 0$ is chosen small enough, there exist exceptional times $t$ for which an infinite cluster appears in $\omega_{K^{\alpha}}(t)$. (On the triangular grid, we prove that it holds for all $\alpha < \alpha_0 = \frac {217}{816}$.) The existence of such exceptional times for standard i.i.d. dynamical percolation (where sites evolve according to independent Poisson point processes) goes back to the work by Schramm-Steif in [SS10]. In order to handle such a $K$-exclusion dynamics, we push further the spectral analysis of exclusion noise sensitivity which had been initiated in [BGS13]. (The latter paper can be viewed as a conservative analogue of the seminal paper by Benjamini-Kalai-Schramm [BKS99] on i.i.d. noise sensitivity.) The case of a nearest-neighbour simple exclusion process, corresponding to the limiting case $\alpha = +\infty$, is left widely open.
Submission history: From: Hugo Vanneuville. [v1] Mon, 16 May 2016 13:28:06 GMT (158kb,D) [v2] Fri, 27 May 2016 18:24:36 GMT (159kb,D) [v3] Tue, 14 Nov 2017 15:32:18 GMT (162kb,D) [v4] Tue, 30 Apr 2019 14:27:40 GMT (161kb,D) [v5] Fri, 28 Jun 2019 14:17:27 GMT (161kb,D) |
The electronic configuration of an atom or molecule is a concept imposed by the orbital approximation. Spectroscopic transitions and other properties of atoms and molecules result from the states and not from the configurations, although it is useful to think about both the configuration and the state whenever possible. While a single determinant wavefunction generally is adequate for closed-shell systems (i.e. all electrons are paired in spatial orbitals), the best descriptions of the electronic states, especially for excited states and free radicals that have unpaired electrons, involve configuration interaction using multiple determinants. In these descriptions different configurations are mixed together and the picture of an orbital configuration disintegrates, and other properties, such as orbital and spin angular momentum and symmetry, are needed to identify and characterize the electronic states of molecules.
While a component of orbital angular momentum is preserved along the axis of a linear molecule, generally orbital angular momentum is quenched due to the irregular shapes of molecules. Angular momentum is quenched because circular motion is not possible when the potential energy function does not have circular symmetry.
The spin orbitals, however, still can be eigenfunctions of the spin angular momentum operators because the spin-orbit coupling usually is small. The resulting spin state depends on the orbital configuration. For a closed-shell configuration, the spin state is a singlet and the spin angular momentum is 0 because the contributions from the \(\alpha\) and \(\beta\) spins cancel. For an open shell configuration, which is characteristic of free radicals, there is an odd number of electrons and the spin quantum number \(s = \frac {1}{2}\). This configuration produces a doublet spin state since \(2S +1 = 2\). Excited configurations result when electromagnetic radiation or exposure to other sources of energy promotes an electron from an occupied orbital to a previously unoccupied orbital. An excited configuration for a closed shell system produces two states, a singlet state \((2S + 1 = 1)\) and a triplet state \((2S + 1 = 3)\), depending on how the electron spins are paired. The z-components of the angular momentum for 2 electrons can add to give +1, 0, or –1 in units of
ħ. The three spin functions for a triplet state are
\[ \alpha (1) \alpha (2)\]
\[\dfrac {1}{\sqrt {2}} [ \alpha (1) \beta (2) + \alpha (2) \beta (1)]\]
\[\beta (1) \beta (2) \label {10-75}\]
and the singlet spin function is
\[\dfrac {1}{\sqrt {2}} [ \alpha (1) \beta (2) - \alpha (2) \beta (1)] \label {10-76}\]
The singlet and triplet states differ in energy even though the electron configuration is the same. This difference results from the antisymmetry condition imposed on the wavefunctions. The antisymmetry condition reduces the electron-electron repulsion for triplet states, so triplet states have the lower energy.
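A tiny sketch (hypothetical code, just bookkeeping over the basis αα, αβ, βα, ββ) makes the exchange behaviour explicit: the three triplet functions are unchanged when electrons 1 and 2 are swapped, while the singlet changes sign.

```python
import math

# Two-electron spin functions as coefficient vectors over (αα, αβ, βα, ββ);
# exchanging electrons 1 and 2 swaps the αβ and βα entries.
def exchange(state):
    aa, ab, ba, bb = state
    return (aa, ba, ab, bb)

r = 1 / math.sqrt(2)
triplet = [(1, 0, 0, 0), (0, r, r, 0), (0, 0, 0, 1)]
singlet = (0, r, -r, 0)

assert all(exchange(s) == s for s in triplet)              # symmetric
assert exchange(singlet) == tuple(-c for c in singlet)     # antisymmetric
print("triplet components: symmetric; singlet: antisymmetric under exchange")
```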
The electronic states of molecules therefore are labeled and identified by their spin and orbital angular momentum and symmetry properties, as appropriate. For example, the ground state of the hydrogen molecule is designated as \(X^1\Sigma^+_g\). In this symbol, the \(X\) identifies the state as the ground state, the superscript 1 identifies it as a singlet state, the sigma says the orbital angular momentum is 0, and the g identifies the wavefunction as symmetric with respect to inversion. Other states with the same symmetry and angular momentum properties are labeled as A, B, C, etc in order of increasing energy or order of discovery. States with different spin multiplicities from that of the ground state are labeled with lower case letters, a, b, c, etc.
For polyatomic molecules the symmetry designation and spin multiplicity are used. For example, an excited state of naphthalene is identified as \(^1B_{1u}\). The superscript 1 identifies it as a singlet state, the letter \(B\) and subscript 1 identify the symmetry with respect to rotations, and the subscript u says the wavefunction is antisymmetric with respect to inversion.
Good quality descriptions of the electronic states of molecules are obtained by using a large basis set, by optimizing the parameters in the functions with the variational method, and by accounting for the electron-electron repulsion using the self-consistent field method. Electron correlation effects are taken into account with configuration interaction (CI). The CI methodology means that a wavefunction is written as a series of Slater Determinants involving different configurations, just as we discussed for the case of atoms. The limitation in this approach is that computer speed and capacity limit the size of the basis set and the number of configurations that can be used.
Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski |
From the “Simple English Wikipedia”
1: The Lorentz Factor is the name of the factor by which time, length, and "relativistic mass" change for an object while that object is moving, and is often written γ (gamma). This number is determined by the object's speed in the following way: $\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}$ where v is the speed of the object and c is the speed of light (expressed in the same units as your speed). The quantity (v/c) is often labeled β (beta) and so the above equation can be rewritten: $\gamma = \frac{1}{\sqrt{1 - \beta^2}}$
Let's examine the Lorentz equation to see what it is actually describing. In the Reciprocal System, the speed of light, c, is unity (1.0) in natural units of space and time—one unit of space per one unit of time. Conventional science uses "man-made" units that are derived by some kind of consensus. For example, the meter (meaning "measure") was defined as one ten-millionth of the distance between the North Pole and the Equator. For the most part, conventional units are arbitrary. However, the Reciprocal System's natural units are a consequence of the structure of nature, inherent in everything.
Starting with the velocity component, the factor v/c is simply normalization to unity, much like converting a range of values to percentages. Since the value of c is 1.0 in the Reciprocal System, the velocity in natural units is already normalized and this can just be reduced to v, making the concept of β unnecessary, as β represents the same consequence for arbitrary units.
We now have a normalized system of $1-v^2$. Knowing that unity is the speed of light, this part of the equation is actually saying $c-v^2$ (in natural units). Because $c = 1$, $c^n = 1$ and $n$ can have any value, so this part of the equation is actually $c^n - v^2$. But when the square root function is considered, it becomes apparent that $n = 2$ and this Lorentz Factor is nothing more than the disguised equation of a right triangle that has been adjusted to express the speed of light as a unit hypotenuse:
$\frac{1}{\gamma} = \sqrt{c^2-v^2}$
$\left( \frac{1}{\gamma} \right)^2 = c^2-v^2$
$c^2 = \left( \frac{1}{ \gamma} \right)^2 + v^2$
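A minimal numeric check of this rearrangement (hypothetical code, with c = 1 so that 1/γ = √(1−v²)) confirms that the point (v, 1/γ) always lands on the unit circle:

```python
import math

# With c = 1: 1/γ = sqrt(1 - v²), so v² + (1/γ)² = 1 for every |v| < 1.
for v in [0.0, 0.25, 0.5, 0.9, 0.999]:
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    print(v, round(v ** 2 + (1.0 / gamma) ** 2, 12))   # always 1.0
```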
Alas, science also tends to overlook one of the more interesting properties of the square root—that the function returns two solutions, a positive one and a negative one. The negative one is ignored (though the absolute value is never included in the Lorentz equation), because it would indicate that time, length and relativistic mass could also be negative. But if you consider both solutions simultaneously, then a bigger problem arises… they cancel each other out and you end up with the classic "division by zero" problem that allows you to do things like proving 2=1. 2 (So don't mention it and hope nobody notices.)
One should also note that the equation for a right triangle is also the equation for a circle, $x^2 + y^2 = r^2$, where r is the radius. Because r = c = 1, this is a unit circle, with the velocity on the x axis and $1/\gamma$ as the corresponding value on the y axis.
By plotting $(v, 1/\gamma)$ in its entirety, the reciprocal relationship becomes clearer. What immediately stands out is that a velocity can drop all the way to -1, the speed of light running backwards. That may sound a bit strange, but once identified in conventional terms, it is a very familiar concept.
In the Reciprocal System, the speed of light (unit speed) is the fulcrum between motion in space and motion in time. As such, it is the upper limit of both of those motions, essentially being the maximum speed of the universe, which is referred to as the progression of the natural reference system.
You can only slow down from this speed. In space, you add time, so the speed of 1s/1t becomes 1s/nt. In time, you add space, going from 1t/1s to 1t/ns, remembering that when you cross the unit speed boundary, inversion takes place and speed, s/t, becomes energy, t/s.
This is indicated in the Lorentz Factor, because any value where v>1 becomes undefined—there is no solution to the equation, because you would be moving at a velocity that is faster than the fastest velocity possible for the Universe. The system is only solvable if -1 ≤ v ≤ +1. Anything (photons, particles, atoms, molecules, etc.) being carried by this progression will be moving at this "maximum speed of the universe." Photons, having no net displacement in space or time (in a vacuum), have no resistance to this speed and will therefore be carried at this maximum speed, which is why we call it the speed of light, or in the Reciprocal System, unit speed, and why the speed of light is constant in all reference frames in Relativity. Photons are not actually moving on their own; they are just being carried by the progression—no relative motion to the speed of the progression.
This is why the speed of light, the maximum speed of the Universe, cannot be exceeded by any velocity in space. 3 It has nothing to do with "infinite mass" or an object shrinking into nonexistence, which is how the Lorentz Factor is interpreted.
The relations in the Lorentz Factor, understood as a unit circle, do occur in the Reciprocal System—but under different names. Larson unknowingly uses it as the basis of his initial motions.
The problem is better understood in the complex plane, where the gamma function represents the imaginary axis (1/γ = -γ). By default, the Universe is expanding at unit speed, having the coordinates of (+1,0) on the diagram.
Larson then introduces the concept of a direction reversal, which results in a linear vibration. This is moving inward (left on the v axis) to the coordinates (0,±1). The progression velocity appears to stop (v=0), but there is now a split across the gamma axis, which is "imaginary" and rotational, creating the two, oppositely-directed rotations that are known as a birotation. 4 The resolution of this birotation can be expressed by Euler's formula using the exponential functions:
$\frac{e^{+i \gamma} + e^{-i \gamma}}{2} = cos(\gamma)$
So this “direction reversal” results in a
cosine function, which Larson defines as a photon—the core of his rotating systems. 5
Now that he has this ±γ "line" to rotate, Larson adds an inward scalar rotation to the photon, moving the net motion to the (-1,0) coordinate with a single speed solution, creating the rotational base, whose net motion opposes the progression at the same velocity, the speed of light running backwards that we call gravity, a very familiar concept.
Essentially, the Lorentz Factor is just a kludge hiding the use of imaginary quantities to describe a gravitational field structure, in a fashion similar to the imaginary quantities used to describe electric and magnetic fields. This gravitational opposition to the progression is what gives the appearance of increasing mass—even though mass remains constant—since a "heavier" object must have more gravitational pull and be harder to move.
The RS2 Approach
The Lorentz Fudge is a 1-dimensional solution to a 2-dimensional problem, as is Larson's definition of the rotational base. However, the Universe is 3-dimensional and, as William Hamilton discovered, it takes 4 dimensions to solve a 3-dimensional rotation: the quaternion.
The RS2 solution was to upgrade the complex plane of the corrected Lorentz Factor and replace it with a quaternion. This, however, changes Larson's 2-unit approach of speed and energy into a 4-unit system of +1, i, i.j and i.j.k=-1. This resulted in a far more accurate representation of the photon, changing it from a linear vibration to a quaternion rotation with similar characteristics, but including electromagnetic properties with a 1-dimensional, electric rotation (k) combined with a 2-dimensional, magnetic rotation (i.j). Since i.j = k, a birotation can be formed along electromagnetic lines, using i.j.(-k), providing similar behavior to Prof. KVK Nehru's original birotation model.
This will be elaborated on in a future paper, but I just wanted to note the RS/RS2 difference here.
Summary
The Lorentz Factor is the equation of a right triangle, where speed is normalized for a unit speed of light.
Ignoring the negative roots and velocities of the equation conceals the fact that the Lorentz Factor is actually just a unit circle.
Unit speed is the maximum speed the physical universe is capable of, expressed in the Reciprocal System as the outward progression of the natural reference system. 6
The minimum speed is negative unity, the inward motion expressed by gravitation.
The default speed of the Universe is unity. When a conventional object “at rest” is accelerated, what is actually happening is that the inward motion of gravity is being neutralized. A rocket isn’t increasing its speed by thrust—the thrust is reducing the effect gravitation is having upon it, allowing it to return to the default speed of unity (the speed of light).
It is impossible to accelerate an object past the speed of light in space, because you are not adding velocity—you are reducing resistance, and once that resistance is gone, you are done. This is the situation in particle accelerators and why electromagnetic systems cannot accelerate a particle past the speed of light. All they can do is reduce the resistance preventing the particle from moving at the speed of light.
The circular form of the Lorentz Factor produces similar results to Larson’s construction of the rotational base.
When the 1-dimensional interpretation is upgraded to three dimensions, the linear vibration of the photon becomes a quaternion rotation possessing electromagnetic characteristics, such as TE, TM and TEM modes.
What the Lorentz Factor comes down to is a device that is used to try to understand the inward, “backwards speed of light” motion of gravitation, similar to Ptolemy's epicycle description of the reversal of planetary motion. But when placed in the proper context, one can see past the illusions of mathematics and understand the underlying concepts.
2 Let $a=b$. Then $a^2 = ab$; $a^2+a^2 = a^2+ab$; $2a^2 = a^2+ab$; $2a^2-2ab = a^2+ab-2ab$; $2a^2-2ab=a^2-ab$; $2(a^2-ab)=1(a^2-ab)$; cancelling $(a^2-ab)$ from both sides gives $2=1$.
3 I did qualify that, because faster-than-light motions are commonplace in the Reciprocal System, but manifest differently than "warp drive." The translational velocities are always less than or equal to unit speed.
5 Larson’s solution is 2-dimensional; the 3-dimensional solution proposed by RS2 uses a quaternion rotation to accomplish the reversal, resulting in a more complex structure of the photon.
6 Known to astronomers as the Hubble Expansion. |
Tangent, curve of the
The graph of the function $y=\tan x$ (Fig.a). The curve of the tangent is a periodic curve with period $T=\pi$ and asymptotes $x=(k+1/2)\pi$, $k\in\mathbf Z$. While $x$ varies from $-\pi/2$ to $+\pi/2$, $y$ grows monotonically from $-\infty$ to $+\infty$; thus, the curve of the tangent is composed of infinitely many separate congruent curves obtained from one another by translation over $k\pi$ along the $x$-axis. The points of intersection with the $x$-axis are $(k\pi,0)$. These are also the points of inflection, with inclination angle $\pi/4$ to the $x$-axis.
Figure: t092130a
The curve of the tangent reflected mirror-like in the $x$-axis and translated to the left over $\pi/2$ (Fig.b) becomes the graph of the function $y=\operatorname{cotan}x=-\tan(\pi/2+x)$ (cf. Cotangent); its asymptotes are $x=k\pi$; its intersections with the $x$-axis are $((k+1/2)\pi,0)$ and these points are also the points of inflection, with inclination angle $-\pi/4$ with respect to the $x$-axis.
Figure: t092130b
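For readers without access to the figures, a small plotting sketch (hypothetical code, assuming NumPy and Matplotlib are available) reproduces Fig. a; the dashed vertical lines mark the asymptotes $x=(k+1/2)\pi$:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2 * np.pi, 2 * np.pi, 4000)
y = np.tan(x)
y[np.abs(y) > 10] = np.nan                 # break the curve near the asymptotes

plt.plot(x, y)
plt.ylim(-6, 6)
for k in range(-2, 2):
    plt.axvline((k + 0.5) * np.pi, linestyle='--', linewidth=0.5)
plt.title('y = tan x')
plt.show()
```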
How to Cite This Entry:
Tangent, curve of the.
Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Tangent,_curve_of_the&oldid=33317 |
The assessment of nodal planes is not as trivial as it might seem at first, because they are usually not planes at all.
Orbitals can be explained in terms of symmetry operations. There are three different operations and each of them has a unique element (in cartesian coordinates):
Rotation: Axis Mirror: Plane Inversion: Point
When we talk about hydrogen type atomic orbitals, we usually refer to their cartesian form. It is already fairly obvious for the $\ce{2s}$, that there is no nodal plane, while there are certainly nodes.
The representations of $\ce{s}$ orbitals are always trivial with respect to the total symmetry of the external field (= Point group). That means any symmetry operation, which is given through the external field will be matched by the $\ce{s}$ orbital. Another term for this is totally symmetric.
The representation of $\ce{p}$ orbitals are always antisymmetric with respect to inversion in any given external field.
The representation of $\ce{d}$ orbitals is always antisymmetric to two mirror planes. That excludes of course the $\ce{d_{z^2}}$ orbital, which is not a cartesian, but a spherical representation. (Note: In cartesian space there are 6 $\ce{d}$ orbitals. As quantum numbers only allow 5 spherical orbitals, they have to be transformed. See for example at chemissian.com or Schlegel and Frisch.) Early basis sets for quantum chemical calculations used the cartesian representation, because it was easier to code and calculate.
Due to the geometric arrangement of nuclei in a molecule, the external field is given through the point group of the molecule. Every molecular orbital can be expressed through symmetry operations of that point group.
Cyclopropane has $D_\mathrm{3h}$ symmetry. And every molecular orbital has to respect that. I have already written about these orbitals in Why does cyclopropane give bromine water test? However, I would like to shed some more light on this, because I believe the picture you found is not quite correct.
The following picture shows the depicted orbitals from your source, as well as the LUMO. Here W1 corresponds to HOMO-2, W2 and W3 correspond to the two degenerate HOMOs, W4 corresponds to LUMO+3, and finally W5 and W6 correspond to the LUMO+2 pair. The ordering of these orbitals is taken from a BP86/cc-pVTZ calculation with Gaussian09, a methodology that I believe gives fairly accurate results for molecular orbitals. (It is, however, noteworthy that unoccupied orbitals are constructed from occupied orbitals. That means that their physical meaning is limited.)
Let's have a closer look at the orbitals. The HOMO-2 is of $A_1'$ symmetry and therefore behaves like an $\ce{s}$ orbital. There is no antibonding nodal surface in this orbital. That means at any bonding axis of the ring structure the sign of the wavefunction does not change. There is however a nodal surface that does not affect bonding, as was already pointed out in some comments.
The HOMOs are of $E'$ symmetry and therefore behave like $\ce{p_x}$ and $\ce{p_y}$ orbitals. These orbitals have one antibonding nodal surface. As a result of symmetry restrictions these nodal surfaces are perpendicular to each other.
This is also true for the LUMO+2s. In addition to this there is another antibonding nodal surface with respect to the $\ce{C-H}$ bonds.
The LUMO again is of $A_1'$ symmetry and behaves like an $\ce{s}$ orbital. There is again no antibonding nodal surface with respect to the $\ce{C-C}$ bonds, but the main electron density would be outside of any bond and there is an antibonding nodal surface with respect to the $\ce{C-H}$ bonds.
The LUMO+3 is of $A_2'$ symmetry and does therefore not behave like any other atomic orbital. Here we find three antibonding nodal surfaces with respect to the $\ce{C-C}$ bonds.
The bonding situation in cyclopropane is very complicated and can be explained in many different ways. It is not obvious that the molecule is a $\sigma$ aromatic system, having in-plane $\pi$ orbitals (HOMO). The explanation of its molecular orbitals was first composed in terms of a Walsh-like behaviour on the basis of an extended Hückel calculation with a minimum basis.
When you disregard the $\ce{C-H}$ bonds and only allow three carbon $\ce{sp^2}$ and three carbon $\ce{p}$ orbitals, then you can only arrive at the bonding picture that was depicted in your source (and many other publications), as you can only have six molecular orbitals formed from these. There are, however, many more atomic orbitals that mix into the final wavefunction. First of all it is necessary to state that the used $\ce{sp^2}$ orbitals are not of the ideal $\frac13\ce{s}+\frac23\ce{p}$ composition (see more about that in the linked question and the answer of ron therein).
Now, finally, for the discussion about the correlation of nodal planes and increasing energy. It is a very common statement that is missing its most important limitation: it is only completely true for orbitals of the same symmetry. (This is also a necessary requirement.)
For example, the energy of the $\ce{s}$ orbital series increase with the main quantum number, as you add one more nodal surface per one increment in $n$: $\ce{1s->0; 2s->1; 3s->2;...}$. The same applies to the $\ce{p}$ orbital series: $\ce{2p->1; 3p->2; ...}$.
Now this statement became popular, when talking about aromatic systems. In the framework of Hückel's molecular orbital method, this must be true, since all regarded orbitals are of the same symmetry.
However, it is usually true that an orbital with fewer nodal planes is more stable than another with more, but this is more a gut feeling than actual, factual science. |
1 The Cosmic Sector
One of the outstanding achievements of the Reciprocal System of theory is the discovery of the fact that the physical universe is not limited to our familiar world of three dimensions of space and one dimension of time, the material sector as Larson calls it. By virtue of the symmetry between the intrinsic natures of space and time, brought to light by Larson, he demonstrates the existence of a cosmic sector of the physical universe, wherein space-time relations are the inverse of those germane to the material sector.
The normal features of the cosmic sector could be represented in a fixed three-dimensional temporal reference frame, just as those of the material sector could be represented in a fixed, three-dimensional spatial reference frame. In the universe of motion, the natural datum on which the physical universe is built is the outward progressional motion of space-time at unit speed (which is identified as the speed of light). The entities of the material sector are the result of downward displacement from the background speed of unity (speeds less than unity), while those of the cosmic sector are the result of upward displacement from unity (speeds greater than unity). But entities—like radiation—that move at the unit speed, being thereby at the boundary between the two sectors, are phenomena that are common to both these sectors.
Gravitation, being always in opposition to the outward space-time progression, is inward in scalar direction in the three-dimensional spatial or temporal reference frames. Since independent motion in the material sector (three-dimensional space) is motion in space, gravitation in our sector acts inward in space and results in large-scale aggregates of matter. Gravitation in the cosmic sector acts still inward but it is inward in three-dimensional time rather than in space. Consequently the cosmic sector equivalents of our stars and galaxies are aggregates in time rather than in space.
Further, as Larson points out, “…the various physical processes to which matter is subject alter positions in space independently of positions in time, and vice versa. As a result, the atoms of a material aggregate, which are contiguous in space, are widely dispersed in time, while the atoms of a cosmic aggregate, which are contiguous in time, are widely dispersed in space…
“Radiation moves at unit speed relative to both types of fixed reference systems, and can therefore be detected in both sectors regardless of where it originates. Thus we receive radiation from cosmic stars and other cosmic objects just as we do from the corresponding material aggregates. But these cosmic objects are not aggregates in space. They are randomly distributed in the spatial reference system. Their radiation is therefore received in space at a low intensity and in an isotropic distribution. Such a background radiation is actually being received.” 1
2 The Radiation Temperature
An approach to the derivation of the temperature of this cosmic background radiation is described now. This can be seen to involve the consideration of several other previously derived items like the relative cosmic abundances of the elements and their thermal destructive limits. To this extent, therefore, the present analysis has to be treated as provisional—a revision in the derivation of these items would entail a corresponding modification in the present derivation. Notwithstanding this, the general approach to the derivation described herein continues to be valid as far as it goes.
The basis for a quantitative inquiry into the properties of the phenomena of the cosmic sector, in general, is the fact that the space-time relations are inverted at the unit level. For instance, “…the cosmic property of inverse mass is observed in the material sector as a mass of inverse magnitude. Where a material atom has a mass of Z units on the atomic number scale, the corresponding cosmic atom has an inverse mass of Z units which is observed in the material sector as if it were a mass of 1/Z units.” 2
“Because of the inversion of space and time at the unit level, the frequencies of the cosmic radiation are the inverse of those of the radiation in the material sector. Cosmic stars emit radiation mainly in the infrared, rather than mainly at the optical frequencies… and so on.” 3 Therefore, we expect the background radiation to be at a low temperature (that is, high inverse temperature).
2.1 Averaged Energy Density
We shall attempt to calculate the temperature of the background radiation by adopting the energy density approach. The energy density in space of blackbody radiation at a temperature of T Kelvin is given by
$U = b \times T^4 \frac{erg}{cm^3}$
(1)
where $b = 7.5643\times10^{-15}\ \mathrm{erg\,cm^{-3}\,K^{-4}}$.
The major contribution to the background radiation is from the cosmic stars. As such, we shall attempt to arrive at the average energy density of the cosmic star radiation by finding the lumped average of the energy density of the radiation from all the stars in the material sector and then taking its inverse. At this juncture we should recognize a point of crucial importance which renders the analysis simple: to an observer in the cosmic sector the atoms at the center of a material sector star are as much exposed as the ones at its periphery, and the radiation from the interior atoms is as much observable as that from the outer atoms. This is because, as already mentioned, the locations of the atoms of a spatial aggregate are randomly and widely dispersed in the three-dimensional temporal reference frame. Analogously, to an observer in the material sector all the atoms of the cosmic sector star are observable. Since (i) the temperatures in the stellar core are larger by many orders of magnitude—nearly a billion times—than the temperatures in the outer regions of a star and (ii) energy density is proportional to the fourth power of temperature (Equation (1)), no appreciable error would be introduced if the energy density of the stellar radiation, originated in one sector but as observed in the opposite sector, is calculated on the basis of the central temperature alone.
The temperature prevailing at the center of a star is determined by the destructive temperature $T_d$ of the heaviest element in it that is currently getting converted to radiation by the thermal neutralization process. On theoretical grounds we expect stars “burning”—that is, undergoing thermal neutralization—elements with atomic numbers ranging all the way from 117 down to a limiting value, $Z_s$, to occur. $Z_s$ is the atomic number of the element which, as explained in detail elsewhere 4, when it arrives at the center of the star, leads to a chain of events culminating in the thermal destruction of the Co/Fe group of elements, in other words, in Type I supernova explosions. No star burning an element with atomic number less than $Z_s$ is possible because it would have disintegrated in the supernova explosions. Theoretical considerations suggest that $Z_s$ could be between 30 and 26. 4 The relevant energy density of the radiation of a star burning element Z at its center is
$U_z = b \times \left( T_{d,z} \right)^4 \frac{erg}{cm^3}$
(2)
where $T_{d,z}$ is the thermal destructive limit of element Z, in kelvin.
Now it becomes necessary to estimate, for each Z from 117 down to $Z_s$, the proportion of stars whose central temperature equals the destructive limit of the element Z. Since the more abundant an element happens to be, the larger would be the number of stars burning it, then on the basis of the cosmic abundance of the elements (which is taken to be uniform throughout the universe) we can deduce the ratio of the number of stars burning element Z to the total number of stars as
$f_z = \frac{a_z}{S(a_z)}$
(3)
where $a_z$ is the relative cosmic abundance of element Z and S( ) stands for
$$\sum^{117}_{Z=Z_s} (\ )$$
Hence the expected energy density of the radiation from all the stars can be given by
$U = S(f_z U_z)$
(4)
2.2 The Inverse Energy Density
Because of the reciprocal relationship between corresponding quantities of the material and cosmic sectors, the energy density of the radiation from the cosmic stars would be the inverse of this quantity. But before taking the inverse we must convert the concerned quantities into the natural units from the conventional units. Thus the energy density in natural units is
$u = \frac{U}{(E_n S_n^{-3})}$
(5)
where
$E_n$ = natural unit of energy expressed in conventional units 5 = $1.49175\times10^{-3}$ erg
and $S_n$ = natural unit of space expressed in conventional units 5 = $4.558816\times10^{-6}$ cm
We need to recognize now that radiation in the cosmic sector is dispersed in three-dimensional time whereas the material sector progresses linearly in one-dimensional time. A one-dimensional progression in the cosmic sector has two mutually opposite “directions” in time (say, AB and BA), only one of which is coincident with the “direction” of the time progression of the material sector. The total radiation from the cosmic sector is distributed equally between the two temporal directions and consequently the energy density apparent to us would be only half of the total. That is
$u_{app} = \frac{u}{2}$
(6)
Larson brings out this point of the relationship between the actual and the apparent luminosities while discussing the quasar radiation. 6 Finally, the energy density of the radiation from the cosmic stars as observed by us is the inverse of this quantity
$ u_c = \frac{1}{u_{app}} = \frac{2}{u}$ in natural units
(7)
2.3 Thermal versus Inverse Thermal Distribution
At this juncture a question that naturally arises is whether the nature of this radiation from the cosmic sector would be thermal or not. Especially, recalling what has been quoted from Reference 3 earlier, it is clear that this radiation is of the inverse thermal type. Under these circumstances the adoption of Equation (1) is questionable since it pertains only to thermal radiation.
On examining the values of the thermal destructive limits of the elements, we find them all larger than the unit temperature, that is, the temperature corresponding to unit speed. 4 If we remember that the demarcations of the speed ranges of the material sector are as much applicable to the linear vibratory speeds (thermal motion) as to the linear translational speeds, it becomes apparent that the central temperatures of the material sector stars are in the intermediate range, that is, on the time-zero side of the one-dimensional range. 7
Quoting from Larson: “…ordinary thermal radiation is… produced by matter at temperatures below that corresponding to unit speed. Matter at temperatures above this level produces inverse thermal radiation by the same process,… with an energy distribution that is the inverse of the normal distribution applicable to thermal radiation.” 8
From the foregoing the following syllogism suggests itself:
The energy distribution of a cosmic sector phenomenon would be the inverse of the energy distribution of the corresponding material sector phenomenon.
The phenomenon under consideration is the distribution of radiation from the core of a cosmic sector star.
The distribution of the radiation from the core of a material sector star is inverse thermal, since it originates in the intermediate temperature range.
Hence the distribution of the radiation from the core of a cosmic sector star would be the inverse of inverse thermal, that is, thermal.
2.4 Comparison with Observations
Reverting to the conventional units, we have the apparent energy density of the background radiation as
$U_c = u_c (E_n S_n^{-3}) \frac{erg}{cm^3}$
(8)
Finally the derived temperature of the background radiation, with the energy density given by Equation (8) is (adopting Equation (1))
$T_c = \left( \frac{U_c}{b} \right)^{1/4} K$
(9)
Substituting from Equations (4), (5), (7) and (8) in Equation (9) and simplifying
$T_c = 5.4257\times10^{13} \left[ \frac{S(a_z)}{S(a_z T^4_{d,z})} \right ]^{1/4} K$
(10)
Adopting the theoretically calculated values of $a_z$, the relative cosmic abundance, 9 and $T_{d,z}$, the thermal destructive limits 4 of the elements, the background temperature $T_c$ is worked out for $Z_s$ = 117, 116, …, 26. The listing of a Pascal program for this calculation is given in the Appendix. Some of the computed values of $T_c$ are listed in Table 1 for $Z_s$ values ranging from 31 to 26.

Table 1: Computed Values of the Cosmic Background Radiation Temperature
$Z_s$    $T_c$ (K)
31       2.989
30       2.798
29       2.614
28       2.435
27       2.587
26       2.739
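For convenience, here is a minimal sketch of how Equation (10) can be evaluated (the Appendix uses Pascal; this is an equivalent Python sketch). The abundance and destructive-limit values below are placeholders for illustration only, not the theoretically calculated tables used to produce Table 1.

```python
# Sketch of Equation (10): T_c = 5.4257e13 * [ S(a_z) / S(a_z * T_dz^4) ]^(1/4)
# The two dictionaries below hold PLACEHOLDER values, not the tabulated data behind Table 1.

abundance = {26: 1.0e-3, 27: 2.0e-4, 28: 5.0e-4, 29: 3.0e-5, 30: 8.0e-5}      # a_z (hypothetical)
destructive_limit = {26: 1.2e6, 27: 1.3e6, 28: 1.4e6, 29: 1.5e6, 30: 1.6e6}   # T_d,z in K (hypothetical)

def background_temperature(z_s, a, t_d, z_max=117):
    """Evaluate Eq. (10), summing over Z = z_s .. z_max for the elements present in the tables."""
    zs = [z for z in a if z_s <= z <= z_max]
    s_a = sum(a[z] for z in zs)                       # S(a_z)
    s_at4 = sum(a[z] * t_d[z] ** 4 for z in zs)       # S(a_z * T_d,z^4)
    return 5.4257e13 * (s_a / s_at4) ** 0.25

print(background_temperature(26, abundance, destructive_limit))
```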
The most probable candidate for $Z_s$, either from the theoretical considerations 4 or from the empirical cosmic abundance data, turns out to be 30. The expected temperature of the background radiation corresponding to $Z_s$ = 30 can be seen to be 2.798 Kelvin. The observed values reported in the literature range from 2.74 to 2.9 Kelvin. It is instructive to note that the value of this temperature calculated on the basis of the element Fe (that is, $Z_s$ = 26), which according to Larson is the element responsible for the supernova explosion, turns out to be 2.74 Kelvin. This is in fair agreement with the recently published value of 2.75 Kelvin estimated from accurate observations. 10 Even though the derivation of the temperature of the background radiation described herein is cursory, it suffices to demonstrate that it could be derived from theory alone in the context of the Reciprocal System.

3 Conclusions
To highlight some of the important points brought out:
3.1 The stars of the cosmic sector of the physical universe are aggregates in time and are observed atom by atom, being randomly distributed in the three-dimensional space.
3.2 The radiation from these is observable as the cosmic background radiation: its absolute uniformity and isotropy resulting from item 3.1 above.
3.3 The distribution pattern of this radiation is the inverse of inverse thermal, that is, thermal.
3.4 Since the radiation originating from the cosmic stars gets equally divided between the two opposite “directions” of any single time dimension, the apparent luminosity as observed from the spatial reference system of our material sector (which progresses “unidirectionally” in time) is half of the actual luminosity.
3.5 The energy density of the background radiation is the apparent energy density of the cosmic star radiation, which is the reciprocal of the energy density of the material star radiation after accounting for item 3.4 above.
3.6 The temperature of the background radiation computed for $Z_s$ = 30 is 2.798 Kelvin and for $Z_s$ = 26 is 2.739 Kelvin (where $Z_s$ is the atomic number of the element at the stellar core responsible for the Type I supernova). These are in close agreement with the observational value of 2.75 Kelvin.
2 Dewey B. Larson, Nothing but Motion, North Pacific Pub., 1979, p. 190.
3 Dewey B. Larson, The Universe of Motion, North Pacific Pub., 1984, p. 387.
4 K.V.K. Nehru, Intrinsic Variables, Supernovae and the Thermal Limit, Reciprocity, XVII № 1, Spring 1988, p. 20.
7 Ibid., Figure 8, p. 72.
8 Ibid., p. 246.
10 David T. Wilkinson, Anisotropy of the Cosmic Blackbody Radiation, Science, Vol. 232, 20 June 1986, pp. 1517-1522. |
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).
I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x _1 ,x _2 ,...,x _n ) = x _1 x _2 ···x _n$ has $2^ {n+1} −2$ non-constant polynomials in R dividing it.
But, for $n=2$, I can't find any other non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, $xy$
I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set-up some notation.Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...
Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)
On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case
@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question.
Moreover, the title is vague and doesn't clearly ask a question.
And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.
but if a title inherently states what the OP is looking for, I hardly see the fact that it has been explicitly restated as a reason for it to be closed; no, it was because I originally had a lot of errors in the expressions when I typed them out in LaTeX, but I fixed them almost straight away
lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre
I have another problem: Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60 mph. How long until Train B overtakes Train A?
@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out
By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point
So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying
But 240 miles seems waaay too short to cross two time zones
So my inclination is to say the answer key is nonsense
You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there,
I'm currently going through a proof of why all general solutions to second ODE look the way they look. I have a question mark regarding the linked answer.
Where does the term e^{(r_1-r_2)x} come from?
It seems like it is taken out of the blue, but it yields the desired result. |
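For what it's worth, here is one standard way that factor shows up; this is a sketch of the usual reduction-of-order argument, which may or may not be exactly how the linked answer proceeds. Set $u = y' - r_2 y$. Then equation (1) becomes $u' - r_1 u = 0$, so $u = C e^{r_1 x}$, i.e. $y' - r_2 y = C e^{r_1 x}$. Multiplying by the integrating factor $e^{-r_2 x}$ gives

$$\bigl(y\,e^{-r_2 x}\bigr)' = C\,e^{(r_1 - r_2)x},$$

and that is exactly where $e^{(r_1-r_2)x}$ enters: integrating (treating the cases $r_1 \ne r_2$ and $r_1 = r_2$ separately) and multiplying back by $e^{r_2 x}$ produces the familiar general solutions.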
Collisional Frequency is the average rate in which two reactants collide for a given system and is used to express the average number of collisions per unit of time in a defined system.
Background and Overview
To fully understand how the collisional frequency equation is derived, consider a simple system (a jar full of helium) and add each new concept in a step-by-step fashion. Before continuing with this topic, it is suggested that the articles on collision theory and collisional cross section are reviewed, as these topics are essential to understanding collisional frequency. The equation for collisional frequency is the following:
\(Z_{AB} = N_{A}N_{B}\left(r_{A} + r_{B}\right)^2\sqrt{ \dfrac{8\pi k_{B}T}{\mu_{AB}}} \)
Also, although technically these statements are false, the following assumptions are used when deriving and calculating the collisional frequency: All molecules travel through space in straight lines. All molecules are hard, solid spheres. The reaction of interest is between only two molecules. Collisions are hit or miss only. They occur when the distance between the centers of the two reactants is less than or equal to the sum of their respective radii. Even if the two molecules barely miss each other, it is still considered a complete miss. The two molecules do not interact (in reality, their electron clouds would interact, but this has no bearing on the equation).

Single Molecule Moving
In determining the collisional frequency for a single molecule, \(Z_i\), picture a jar filled with helium atoms. These atoms collide by hitting other helium atoms within the jar. If every atom except one is frozen and the number of collisions in one minute is counted, the collisional frequency (per minute) of a single atom of helium within the container could be determined. This is the basis for the equation.
\(Z_i = \dfrac{(Volume \; of \; Collisional \; Cylinder) (Density)}{Time}\)
While the helium atom is moving through space, it sweeps out a collisional cylinder, as shown above. If the center of another helium atom is present within the cylinder, a collision occurs. The length of the cylinder is the helium atom's mean relative speed, \(\sqrt{2}\langle c \rangle\), multiplied by change in time, \(\Delta{t}\). The mean relative speed is used instead of average speed because, in reality, the other atoms are moving and this factor accounts for some of that. The area of the cylinder is the helium atom's collisional cross section.
Although collision will most likely change the direction an atom moves, it does not affect the volume of the collisional cylinder, which is due to density being uniform throughout the system. Therefore, an atom has the same chance of colliding with another atom regardless of direction as long as the distance traveled is the same.
\(Volume \; of \; Collisional \; Cylinder = \sqrt{2}\pi{d^2}\langle c \rangle\Delta{t}\)
Density
Next, account must be taken of the other moving atoms that the helium atom can hit; this is simply the density \(\rho\) of helium within the system. The density component can be expanded in terms of N and V, where N is the number of atoms in the system and V is the volume of the system. Alternatively, the density can be written in terms of pressure by relating pressure to volume with the perfect gas law, PV = nRT:

\[\dfrac{N}{V} = \dfrac{nN_A}{V} = \dfrac{PN_A}{RT} = \dfrac{P}{k_BT}\]
The Full Equation
When you substitute in the values for \(Z_i\), the following equation results:
\[{Z_{i} = \dfrac{\sqrt{2}\pi d^{2} \left \langle c \right \rangle\Delta{t}\left(\dfrac{N}{V}\right)}{\Delta{t}}}\]
Cancel Δt:
\[Z_{i} = \sqrt{2}\pi d^{2} \left \langle c \right \rangle\left(\dfrac{N}{V}\right)\]
All Molecules Moving System: \(Z_{ii}\)
Now imagine that all of the helium atoms in the jar are moving again. When all of the collisions for every atom of helium moving within the jar in a minute are counted, \(Z_{ii}\) results. The relation is thus:
\[Z_{ii} = \dfrac{1}{2}Z_{i}\left(\dfrac{N}{V}\right)\]
This expands to:
\[Z_{ii} = \dfrac{\sqrt{2}}{2}\pi d^{2}\left \langle c \right \rangle\left(\dfrac{N}{V}\right)^2\]
System With Collisions Between Different Types of Molecules: \(Z_{AB}\)
Consider a system of hydrogen in a jar:
\[H_{A} + H_{BC} \leftrightharpoons H_{AB} + H_{C}\]
In considering hydrogen in a jar instead of helium, there are several problems. First, the \(H_A\) ions have a smaller radius than the \(H_{BC}\) molecules. This is easily solved by accounting for the different radii, which changes \(d^{2}\) to \(\left(r_A + r_B\right)^2\).
The second problem is that the number of \(H_A\) ions could be much different from the number of \(H_{BC}\) molecules. So we expand \(\dfrac{\sqrt{2}}{2}\left(\dfrac{N}{V}\right)^2\) to account for the number of both reacting molecules to get \(N_AN_B\). Because two reactants are considered, \(Z_{ii}\) becomes \(Z_{AB}\), and the two changes are combined to give the following equation:
\[Z_{AB} = N_{A}N_{B}\pi\left(r_{A} + r_{B}\right)^2 \left \langle c \right \rangle\]
Mean speed, \( \left \langle c \right \rangle \), can be expanded:
\[ \left \langle c \right \rangle = \sqrt{\dfrac{8k_BT}{\pi m}}\]
This leads to the final change to the collisional frequency equation. Because two different molecules must be taken into account, the equation must accommodate molecules of different masses (m). So, mass (m) must be converted to reduced mass, \( \mu_{AB} \), converting a two-body system to a one-body system. Now we substitute \( \left \langle c \right \rangle \) into the \(Z_{AB}\) equation to obtain:
\[Z_{AB} = N_{A}N_{B}\left(r_{A} + r_{B}\right)^2\pi\sqrt{ \dfrac{8k_{B}T}{\pi\mu_{AB}}}\]
Cancel \(\pi\):
\[Z_{AB} = N_{A}N_{B}\left(r_{A} + r_{B}\right)^2\sqrt{\dfrac{8\pi{k_{B}T}}{\mu_{AB}}}\]
with
\(N_A\) is the number of A molecules in the system
\(N_B\) is the number of B molecules in the system
\(r_A\) is the radius of molecule A
\(r_B\) is the radius of molecule B
\(k_B\) is the Boltzmann constant, \(k_B = 1.380\times10^{-23}\) joules per kelvin
\(T\) is the temperature in Kelvin
\(\mu_{AB}\) is the reduced mass, found by using the equation \(\mu_{AB} = \dfrac{m_Am_B}{m_A + m_B}\)

Variables that Affect Collisional Frequency

Temperature: As is evident from the collisional frequency equation, when temperature increases, the collisional frequency increases.
Density: From a conceptual point of view, if the density is increased, the number of molecules per volume is also increased. If everything else remains constant, a single reactant comes in contact with more atoms in a denser system. Thus if density is increased, the collisional frequency must also increase.
Size of Reactants: Increasing the size of the reactants increases the collisional frequency. This is directly due to increasing the radius of the reactants, as this increases the collisional cross section, which in turn increases the collisional cylinder. Because the radius term is squared, if the radius of one of the reactants is doubled, the collisional frequency is quadrupled. If the radii of both reactants are doubled, the collisional frequency is increased by a factor of 16.

Problems

If the temperature of the system was increased, how would the collisional frequency be affected?
If the masses of both the reactants were increased, how would the collisional frequency be affected?
0.4 moles of \(N_2\) gas (molecular diameter = \(3.8\times10^{-10}\) m and mass = 28 g/mol) occupies a 1-liter (0.001 m³) container at 1 atm of pressure and at room temperature (298 K).
a) Calculate the number of collisions a single molecule makes in one second. (Hint: use \(Z_i\).)
b) Calculate the binary collision frequency. (Hint: use \(Z_{ii}\).)
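As an illustration only, here is a small Python sketch of how parts (a) and (b) might be estimated from the formulas above, taking the number density from the stated 0.4 mol in 0.001 m³; the variable names are ours, not from the text.

```python
import math

# Part (a): collisions per second for one N2 molecule, Z_i = sqrt(2)*pi*d^2*<c>*(N/V).
k_B = 1.380e-23          # J/K
N_A = 6.022e23           # 1/mol
d = 3.8e-10              # m, molecular diameter
m = 28e-3 / N_A          # kg, mass of one N2 molecule
T = 298.0                # K
n_mol = 0.4              # mol (as stated in the problem)
V = 0.001                # m^3

number_density = n_mol * N_A / V                        # N/V, molecules per m^3
mean_speed = math.sqrt(8 * k_B * T / (math.pi * m))     # <c>
Z_i = math.sqrt(2) * math.pi * d**2 * mean_speed * number_density

# Part (b): Z_ii = (1/2) * Z_i * (N/V), i.e. the collision rate per unit volume.
Z_ii = 0.5 * Z_i * number_density

print(f"Z_i  ≈ {Z_i:.2e} collisions per second")
print(f"Z_ii ≈ {Z_ii:.2e} collisions per second per m^3")
```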
Contributors: Keith Dunaway (UCD), Imteaz Siddique (UCD) |
In a 3D cartesian domain I have four points $A=(x_1,y_1,z_1)$, $B=(x_2,y_2,z_2)$, $C=(x_3,y_3,z_3)$, $D=(x_4,y_4,z_4)$, and they form a plane $ABCD$. I want to find out, if an arbitrary point $P(x,y,z)$ lies within the plane $ABCD$.
Let's say you have a convex, non-self-intersecting quadrilateral defined by its vertices $\vec{q}_i = ( x_i , y_i , z_i )$, $i = 1, 2, 3, 4$.
The first three vertices $\vec{q}_1$, $\vec{q}_2$, and $\vec{q}_3$ are on a plane with normal $\vec{n}$, $$\vec{n} = \left( \vec{q}_2 - \vec{q}_1 \right) \times \left( \vec{q}_3 - \vec{q}_1 \right) \tag{1}\label{NA1}$$ at signed distance $d$ from origin (in units of $\lVert\vec{n}\rVert$), $$d = \vec{n} \cdot \vec{q}_1 = \vec{n} \cdot \vec{q}_2 = \vec{n} \cdot \vec{q}_3 \tag{2}\label{NA2}$$ If the fourth vertex $\vec{q}_4$ is coplanar with the first three points, then $\vec{n} \cdot \vec{q}_4 = d$ too. For this question to make sense, let's assume so.
If we have an arbitrary point $\vec{p} = (x , y , z)$, it lies in the plane if and only if$$\vec{n} \cdot \vec{p} = d \tag{3}\label{NA3}$$This is the test you need to do for each arbitrary point $\vec{p}$. If it fails, the answer is "No, point $\vec{p}$ is not on the plane", and no further testing is done.
Let's say $\vec{p}$ does lie on the plane, but we want to find out whether it is inside the quadrilateral. There are many ways to do this.
We'll obviously want to do that in 2D. To do so, we could project the points to the plane, but here's the trick: we can just drop one of the three coordinates, the one with the largest magnitude in $\vec{n}$. (This way, we actually project to $yz$, $xz$, or $xy$ plane, whichever is most perpendicular to $\vec{n}$.)
(Mathematically, we can do that by multiplying the vectors by matrices $\left[\begin{matrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{matrix}\right]$, $\left[\begin{matrix} 1 & 0 & 0 \\ 0 & 0 & 1 \end{matrix}\right]$, or $\left[\begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{matrix}\right]$, depending on which component of $\vec{n}$ is the largest in magnitude. In practice, we just drop the $x$, $y$, or $z$ coordinates, respectively, from all 3D vectors to transform them to 2D.)
Let's say the projected 2D coordinates are $\overline{q}_1 = ( u_1 , v_1 )$, $\overline{q}_2 = ( u_2 , v_2 )$, $\overline{q}_3 = ( u_3 , v_3 )$, $\overline{q}_4 = ( u_4 , v_4 )$, and $\overline{p} = ( u_0 , v_0 )$. If $\overline{p}$ is within the quadrilateral, it is on the same side of all four edges.
In 2D, if we have a line through $\overline{q}_1$ and $\overline{q}_2$, we can use the 2D analog of vector cross product ($(x_A , y_A)\times(x_B , y_B) = x_A y_B - y_A x_B$) to find the side $\overline{p}$ is on, based on the sign of $$(\overline{q}_2 - \overline{q}_1)\times(\overline{p} - \overline{q}_1) \tag{4}\label{NA4}$$ Note that if $\overline{p}$ is on the line, the above is zero.
Essentially, we only need to calculate $$\left\lbrace \; \begin{aligned} s_1 &= (\overline{q}_2 - \overline{q}_1)\times(\overline{p} - \overline{q}_1) = (v_1 - v_2) u_0 + (u_2 - u_1) v_0 + v_2 u_1 - u_2 v_1 \\ s_2 &= (\overline{q}_3 - \overline{q}_2)\times(\overline{p} - \overline{q}_2) = (v_2 - v_3) u_0 + (u_3 - u_2) v_0 + v_3 u_2 - u_3 v_2 \\ s_3 &= (\overline{q}_4 - \overline{q}_3)\times(\overline{p} - \overline{q}_3) = (v_3 - v_4) u_0 + (u_4 - u_3) v_0 + v_4 u_3 - u_4 v_3 \\ s_4 &= (\overline{q}_1 - \overline{q}_4)\times(\overline{p} - \overline{q}_4) = (v_4 - v_1) u_0 + (u_1 - u_4) v_0 + v_1 u_4 - u_1 v_4 \\ \end{aligned}\right. \tag{5}\label{NA5}$$ Then, if ($s_1 \le 0$, $s_2 \le 0$, $s_3 \le 0$, and $s_4 \le 0$) or ($s_1 \ge 0$, $s_2 \ge 0$, $s_3 \ge 0$, and $s_4 \ge 0$), point $\vec{p}$ is inside the quadrilateral on the plane; otherwise it is outside it.
In a computer program, you can precalculate and store the 12 quadrilateral constants ($(v_1 - v_2)$, $(v_2 - v_3)$, $(v_3 - v_4)$, $(v_4 - v_1)$, $(u_2 - u_1)$, $(u_3 - u_2)$, $(u_4 - u_3)$, $(u_1 - u_4)$, $(v_2 u_1 - u_2 v_1)$, $(v_3 u_2 - u_3 v_2)$, $(v_4 u_3 - u_4 v_3)$, and $(v_1 u_4 - u_1 v_4)$), so that you need at most 11 multiplications and 10 additions for each test with that quadrilateral (not including the 8 multiplications and 12 subtractions in the one-time precalculation). In practice, the conditional jumps (if clauses) needed tend to take more time than the calculation. In some languages like C you can rewrite the sets of four tests as counts (of how many values are less than zero, and how many greater than zero), so that only three conditional jumps get generated.
If the quadrilateral is not convex, you can use any point in polygon test for the projected point $\overline{p}$ in projected polygon $\overline{q}_i$ test. |
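For concreteness, here is a minimal Python sketch of the whole procedure (plane test, coordinate drop, and the four sign tests); it assumes a convex, planar quadrilateral and uses a small tolerance for floating-point comparisons.

```python
import numpy as np

def point_in_planar_quad(p, q1, q2, q3, q4, tol=1e-9):
    """Test whether 3D point p lies in the plane of the convex quad q1..q4 and inside it."""
    p, q1, q2, q3, q4 = map(np.asarray, (p, q1, q2, q3, q4))
    n = np.cross(q2 - q1, q3 - q1)           # plane normal, eq. (1)
    d = np.dot(n, q1)                         # signed distance times |n|, eq. (2)
    if abs(np.dot(n, p) - d) > tol * np.linalg.norm(n):
        return False                          # fails the plane test, eq. (3)

    drop = int(np.argmax(np.abs(n)))          # project: drop the coordinate where |n| is largest
    keep = [i for i in range(3) if i != drop]
    P = p[keep]
    Q = [q[keep] for q in (q1, q2, q3, q4)]

    def cross2(a, b):                         # 2D analog of the cross product, eq. (4)
        return a[0] * b[1] - a[1] * b[0]

    signs = [cross2(Q[(i + 1) % 4] - Q[i], P - Q[i]) for i in range(4)]   # eq. (5)
    return all(s >= -tol for s in signs) or all(s <= tol for s in signs)

# Example: unit square in the z = 0 plane
print(point_in_planar_quad([0.25, 0.5, 0.0], [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]))  # True
```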
My problem is mainly from this lecture notes on convex optimization here page4
Consider the s-t minimum cut problem on an unweighted undirected graph $G=(V,E)$; we can formalize it as the following integer linear programming problem
\begin{equation*} \begin{aligned} & \underset{}{\text{minimize}} & & \sum_{(u,v)\in E}|x_u-x_v| \\ & \text{subject to} & & x_s =1 ,\; x_t=0,\\ & & & x_v\in \left\{ 0,1\right\} \quad \forall v \in V \end{aligned} \end{equation*}
then we can relax it to:
\begin{aligned} & \underset{}{\text{minimize}} & & \sum_{(u,v)\in E}|x_u-x_v| \\ & \text{subject to} & & x_s -x_t =1 \end{aligned}
For $0 \leq l \leq 1$ we define $S_l:= \left\{v \mid x_v \geq l\right\}$; then we have $$\sum_{(u,v)\in E}|x_u-x_v| \geq\int^{1}_{0}|\delta_l(S_l)|dl$$ where $\delta_l(S_l)$ denotes the set of edges crossing the cut $(S_l, V\setminus S_l)$.
How can we see this inequality? |
Talk:Absolute continuity
Could I suggest using $\lambda$ rather than $\mathcal L$ for Lebesgue measure since
it is very commonly used, almost standard
it would be consistent with the notation for a general measure, $\mu$
calligraphic is being used already for $\sigma$-algebras
--Jjg 12:57, 30 July 2012 (CEST)
Between "metric setting" and "References" I would like to type the following lines. But for some reason which is mysterious to me, any time I try, the page comes out a mess... Camillo 10:45, 10 August 2012 (CEST)
if for every $\varepsilon$ there is a $\delta > 0$ such that, for any $a_1<b_1<a_2<b_2<\ldots < a_n<b_n \in I$ with $\sum_i |a_i -b_i| <\delta$, we have \[ \sum_i d (f (b_i), f(a_i)) <\varepsilon\, . \] The absolute continuity guarantees the uniform continuity. As for real valued functions, there is a characterization through an appropriate notion of derivative. Theorem 1: A continuous function $f$ is absolutely continuous if and only if there is a function $g\in L^1_{loc} (I, \mathbb R)$ such that \begin{equation}\label{e:metric} d (f(b), f(a))\leq \int_a^b g(t)\, dt \qquad \forall a<b\in I\, \end{equation} (cp. with ). This theorem motivates the following Definition 2: If $f:I\to X$ is an absolutely continuous function and $I$ is compact, the metric derivative of $f$ is the function $g\in L^1$ with the smallest $L^1$ norm such that \ref{e:metric} holds (cp. with ). OK, I found a way around. But there must be some bug: it seems that whenever I write the symbol "bigger" then things get messed up (now even on THIS page). Camillo 10:57, 10 August 2012 (CEST)
Let:
$R$: The radius of the coil, $h$: the height of the coil, $n$: spiral density, i.e., the number of spirals per unit height.
$r$: The radius of the wire, $A$: The area of the cross section of the wire.
$L$: The total size of the wiring, $N$: The amount of spirals in the coil.
$\bar R$: The overall resistance of the coil, $\rho$: resistivity of the material of the wire.
$B$: The magnetic field (ofc), $k$: The relative magnetic permeability of the core.
$V$: The voltage difference across the ends of the coil, $I$: The current passing through the coil
$P$: Dissipated power by the coil.
With that in mind, some formulas relate our quantities:$$B = k\mu_0nI,\quad n = \frac{1}{2r},\quad L = 2\pi Rnh,\quad A = \pi r^2, \quad P = VI,\quad V = \bar RI,\quad \bar R = \frac{\rho L}{A}$$
These formulas come from physical laws or simple geometry. Also, we can relate the amount of spirals $N$ by $N = nh$. Don't forget that we want to minimize dissipated power, and maximize the amount of magnetic field generated. With that in mind, we shall fix $B$, and find $P$.$$P = VI = \bar RI^2 = \frac{\rho L}{A}\left(\frac{B}{k\mu_0 n}\right)^2 = \frac{\rho\cdot 2\pi Rnh}{\pi r^2}\left(\frac{B}{k\mu_0 n}\right)^2 = \frac{2\rho RhB^2}{nr^2k^2\mu_0^2} = \frac{4\rho RhB^2}{r k^2\mu_0^2} = \frac{8\rho RB^2}{k^2\mu_0^2}N$$
So, your goal is to maximize as much as you can the variables in the denominator, and minimize the numerator, for a fixed $B$, in order to minimize the power you require to operate such a thing. I actually find it quite interesting that $r$ should be maximized in order to minimize $P$ for a fixed $B$. Intuition would say otherwise, wouldn't it? You can do the same for voltage:$$V = \bar R I = \frac{\rho L}{A}\frac{B}{k\mu_0 n} = \frac{\rho\cdot 2\pi Rnh}{\pi r^2}\frac{B}{k\mu_0 n} = \frac{2\rho RhB}{k\mu_0r^2}$$
And lastly, to find the necessary current, it's a direct relationship with the magnetic field:$$I = \frac{B}{k\mu_0n} = \frac{2rB}{k\mu_0}$$
Hereby, our final results:$$I = \frac{2rB}{k\mu_0},\quad\quad V = \frac{2\rho RhB}{k\mu_0r^2},\quad\quad P = \frac{4\rho RhB^2}{r k^2\mu_0^2} = \frac{8\rho RB^2}{k^2\mu_0^2}N$$ |
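Purely as an illustration, here is a small script that evaluates these final formulas for made-up coil parameters (copper-like resistivity, an arbitrary core permeability); all numbers below are placeholders.

```python
import math

# Evaluate I, V, P from the final formulas above for a chosen target field B.
# All parameter values are placeholders for illustration.
mu0 = 4 * math.pi * 1e-7   # T*m/A
rho = 1.7e-8               # ohm*m, copper-like resistivity
k = 200.0                  # relative permeability of the core (placeholder)
R = 0.02                   # m, coil radius
h = 0.05                   # m, coil height
r = 0.0005                 # m, wire radius
B = 0.01                   # T, desired field

n = 1 / (2 * r)                                  # spirals per unit height
I = 2 * r * B / (k * mu0)
V = 2 * rho * R * h * B / (k * mu0 * r**2)
P = 4 * rho * R * h * B**2 / (r * k**2 * mu0**2)  # note P == V * I

print(f"n = {n:.0f} turns/m, I = {I:.3g} A, V = {V:.3g} V, P = {P:.3g} W")
```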
Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why in def of algebraic closure, do we need $\overline F$ is algebraic over $F$? That is, if we remove '$\overline F$ is algebraic over $F$' condition from def of algebraic closure, do we get a different result?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition or attained from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken and the egg question. Did we have the reals first and simplify from them to more abstract concepts or did we have the abstract concepts first and build them up to the idea of the reals.
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$, so for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $a_n z^n$ is summable
Let $g : [0,\frac{ 1} {2} ] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{ 1} {2} ] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s) ds,$ for all $n ≥ 1.$ Show that $lim_{n→∞} n!g_n(t) = 0,$ for all $t ∈ [0,\frac{1}{2}]$ .
Can you give some hint?
My attempt:- $t\in [0,1/2]$ Consider the sequence $a_n(t)=n!g_n(t)$
If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.
I have a bilinear functional that is bounded from below
I try to approximate the minimum by an ansatz function that is a linear combination
of any independent functions of the proper function space
I now obtain an expression that is bilinear in the coefficients
using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0)
I get a set of $n$ equations with $n$ the number of coefficients
a set of n linear homogeneous equations in the $n$ coefficients
Now instead of "directly attempting to solve" the equations for the coefficients I rather look at the secular determinant, which should be zero, otherwise no non-trivial solution exists
This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz.
Avoiding the necessity to solve for the coefficients.
I have trouble formulating the question. But it strikes me that a direct solution of the equations can be circumvented and instead the values of the functional are directly obtained by using the condition that the determinant is zero.
I wonder if there is something deeper in the background, or, so to say, a more general principle.
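One way this usually plays out (for instance in the Rayleigh-Ritz method, where a quadratic form \(\langle c, Hc\rangle\) is made stationary subject to a normalization \(\langle c, Sc\rangle = 1\)) is that the stationarity conditions read \((H - \lambda S)c = 0\), so the roots of the secular determinant are exactly the stationary values of the functional, i.e. the generalized eigenvalues. A small numerical sketch of that principle, with randomly generated placeholder matrices:

```python
import numpy as np
from scipy.linalg import eigh

# H plays the role of the functional's matrix, S of the overlap/normalization matrix.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                      # symmetric "functional" matrix (placeholder)
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)            # symmetric positive definite "overlap" matrix (placeholder)

lam = eigh(H, S, eigvals_only=True)    # generalized eigenvalues = stationary values

# Check: each lam makes the secular determinant (numerically) vanish.
print(lam)
print([np.linalg.det(H - l * S) for l in lam])
```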
If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer z in the mid way of $x, y$ , which is a palindrome and digitsum(z)=digitsum(x).
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well-known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function with fourier series being divergent on a point. Afterwards, Hermann Amandus Schwarz gave an easier example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis. Körner's Fourier Analysis. See pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!! |
The theorem states that
$$ H(P)\leq\mathrm{MinACL}(P)<H(P)+1 $$
where, $\mathrm{MinACL}$ means the minimum average code word length of a given information source, i.e. the average code word length of any Huffman coding and $H$ means the entropy of the probability distribution $P$.
Now, the problem is how to show that for any $\epsilon>0$, there is a probability distribution $P$ s.t. $\mathrm{MinACL}(P) - H(P)\geq1-\epsilon$?
(I was given a hint that I can start with a source s.t. $H(P)=\mathrm{MinACL}(P)$ and try to change the probabilities in order to skew the code.) |
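Following the hint, here is a quick numerical illustration with a two-symbol source: any prefix code must spend at least one bit per symbol, so \(\mathrm{MinACL}(P) = 1\) for two symbols, while \(H(P) \to 0\) as the distribution is skewed. (A sketch, not a full proof.)

```python
import math

def H(p):
    """Binary entropy of a two-symbol source {p, 1 - p}."""
    return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

# A Huffman code for two symbols assigns one bit to each, so MinACL = 1 regardless of p,
# while H(P) -> 0 as p -> 1, making MinACL - H(P) approach 1.
for p in (0.5, 0.9, 0.99, 0.999):
    print(f"p = {p}:  H(P) = {H(p):.4f},  MinACL - H(P) = {1 - H(p):.4f}")
```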
We've been looking at feasibility relations, as our first example of enriched profunctors. Now let's look at another example. This combines many ideas we've discussed - but don't worry, I'll review them, and if you forget some definitions just click on the links to earlier lectures!
Remember, \(\mathbf{Bool} = \lbrace \text{true}, \text{false} \rbrace \) is the preorder that we use to answer true-or-false questions like
while \(\mathbf{Cost} = [0,\infty] \) is the preorder that we use to answer quantitative questions like
or
In \(\textbf{Cost}\) we use \(\infty\) to mean it's impossible to get from here to there: it plays the same role that \(\text{false}\) does in \(\textbf{Bool}\). And remember, the ordering in \(\textbf{Cost}\) is the opposite of the usual order of numbers! This is good, because it means we have
$$ \infty \le x \text{ for all } x \in \mathbf{Cost} $$ just as we have
$$ \text{false} \le x \text{ for all } x \in \mathbf{Bool} .$$ Now, \(\mathbf{Bool}\) and \(\mathbf{Cost}\) are monoidal preorders, which are just what we've been using to define enriched categories! This lets us define
and
We can draw preorders using graphs, like these:
An edge from \(x\) to \(y\) means \(x \le y\), and we can derive other inequalities from these. Similarly, we can draw Lawvere metric spaces using \(\mathbf{Cost}\)-weighted graphs, like these:
The distance from \(x\) to \(y\) is the length of the shortest directed path from \(x\) to \(y\), or \(\infty\) if no path exists.
All this is old stuff; now we're thinking about enriched profunctors between enriched categories.
A \(\mathbf{Bool}\)-enriched profunctor between \(\mathbf{Bool}\)-enriched categories is also called a feasibility relation between preorders, and we can draw one like this:
What's a \(\mathbf{Cost}\)-enriched profunctor between \(\mathbf{Cost}\)-enriched categories? It should be no surprise that we can draw one like this:
You can think of \(C\) and \(D\) as countries with toll roads between the different cities; then an enriched profunctor \(\Phi : C \nrightarrow D\) gives us the cost of getting from any city \(c \in C\) to any city \(d \in D\). This cost is \(\Phi(c,d) \in \mathbf{Cost}\).
But to specify \(\Phi\), it's enough to specify costs of flights from some cities in \(C\) to some cities in \(D\). That's why we just need to draw a few blue dashed edges labelled with costs. We can use this to work out the cost of going from any city \(c \in C\) to any city \(d \in D\). I hope you can guess how!

Puzzle 182. What's \(\Phi(E,a)\)?

Puzzle 183. What's \(\Phi(W,c)\)?

Puzzle 184. What's \(\Phi(E,c)\)?
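Here is one computational sketch of the guess: take \(\Phi(c,d)\) to be the cheapest way of travelling inside \(C\) to a bridge city, crossing a dashed edge, and then travelling inside \(D\). The toy data below is made up for illustration; it is not the graph in the pictures above.

```python
import itertools

INF = float('inf')

# Made-up toy data (NOT the lecture's graph): cheapest costs within C, within D,
# and the costs of the dashed "bridge" edges from C to D.
C = {'N': {'N': 0, 'E': 3, 'S': 3, 'W': 2},
     'E': {'N': INF, 'E': 0, 'S': 4, 'W': INF},
     'S': {'N': INF, 'E': INF, 'S': 0, 'W': INF},
     'W': {'N': INF, 'E': 5, 'S': 1, 'W': 0}}
D = {'a': {'a': 0, 'b': 2, 'c': 6},
     'b': {'a': INF, 'b': 0, 'c': 4},
     'c': {'a': INF, 'b': INF, 'c': 0}}
bridge = {('S', 'a'): 11, ('E', 'b'): 9}

def Phi(c, d):
    # Phi(c, d) = min over bridge edges (x, y) of  C(c, x) + bridge(x, y) + D(y, d)
    return min((C[c][x] + cost + D[y][d] for (x, y), cost in bridge.items()), default=INF)

for c, d in itertools.product(C, D):
    print(c, d, Phi(c, d))
```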
Here's a much more challenging puzzle:
Puzzle 185. In general, a \(\mathbf{Cost}\)-enriched profunctor \(\Phi : C \nrightarrow D\) is defined to be a \(\mathbf{Cost}\)-enriched functor
$$ \Phi : C^{\text{op}} \times D \to \mathbf{Cost} $$ This is a function that assigns to any \(c \in C\) and \(d \in D\) a cost \(\Phi(c,d)\). However, for this to be a \(\mathbf{Cost}\)-enriched functor we need to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category! We do this by saying that \(\mathbf{Cost}(x,y)\) equals \( y - x\) if \(y \ge x \), and \(0\) otherwise. We must also make \(C^{\text{op}} \times D\) into a \(\mathbf{Cost}\)-enriched category, which I'll let you figure out how to do. Then \(\Phi\) must obey some rules to be a \(\mathbf{Cost}\)-enriched functor. What are these rules? What do they mean concretely in terms of trips between cities?
And here are some easier ones:
Puzzle 186. Are the graphs we used above to describe the preorders \(A\) and \(B\) Hasse diagrams? Why or why not?

Puzzle 187. I said that \(\infty\) plays the same role in \(\textbf{Cost}\) that \(\text{false}\) does in \(\textbf{Bool}\). What exactly is this role?
By the way, people often say \(\mathcal{V}\)-category to mean \(\mathcal{V}\)-enriched category, and \(\mathcal{V}\)-functor to mean \(\mathcal{V}\)-enriched functor, and \(\mathcal{V}\)-profunctor to mean \(\mathcal{V}\)-enriched profunctor. This helps you talk faster and do more math per hour. |
While the Data Preparation and Feature Engineering for Machine Learning course covers general data preparation, this course looks at preparation specific to clustering.
In clustering, you calculate the similarity between two examples by combining all the feature data for those examples into a numeric value. Combining feature data requires that the data have the same scale. This section looks at normalizing, transforming, and creating quantiles, and discusses why quantiles are the best default choice for transforming any data distribution. Having a default choice lets you transform your data without inspecting the data's distribution.
Normalizing Data
You can transform data for multiple features to the same scale by normalizing the data. In particular, normalization is well-suited to processing the most common data distribution, the Gaussian distribution. Compared to quantiles, normalization requires significantly less data to calculate. Normalize data by calculating its z-score as follows:
\[x'=(x-\mu)/\sigma\\ \begin{align*} \text{where:}\quad \mu &= \text{mean}\\ \sigma &= \text{standard deviation}\\ \end{align*} \]
Let’s look at similarity between examples with and without normalization. In Figure 1, you find that red appears to be more similar to blue than yellow. However, the features on the x- and y-axes do not have the same scale. Therefore, the observed similarity might be an artifact of unscaled data. After normalization using z-score, all the features have the same scale. Now, you find that red is actually more similar to yellow. Thus, after normalizing data, you can calculate similarity more accurately.
In summary, apply normalization when either of the following is true:
Your data has a Gaussian distribution.
Your data set lacks enough data to create quantiles.

Using the Log Transform
Sometimes, a data set conforms to a power-law distribution that clumps data at the low end. In Figure 2, red is closer to yellow than blue.
Process a power-law distribution by using a log transform. In Figure 3, the log transform creates a smoother distribution, and red is closer to blue than yellow.
Using Quantiles
Normalization and log transforms address specific data distributions. What if data doesn’t conform to a Gaussian or power-law distribution? Is there a general approach that applies to any data distribution?
Let’s try to preprocess this distribution.
Intuitively, if the two examples have only a few examples between them, then these two examples are similar irrespective of their values. Conversely, if the two examples have many examples between them, then the two examples are less similar. Thus, the similarity between two examples decreases as the number of examples between them increases.
Normalizing the data simply reproduces the data distribution because normalization is a linear transform. Applying a log transform doesn't reflect your intuition on how similarity works either, as shown in Figure 5 below.
Instead, divide the data into intervals where each interval contains an equal number of examples. These interval boundaries are called quantiles.
Convert your data into quantiles by performing the following steps:
1. Decide the number of intervals.
2. Define intervals such that each interval has an equal number of examples.
3. Replace each example by the index of the interval it falls in.
4. Bring the indexes to the same range as other feature data by scaling the index values to [0,1].
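Here is a minimal sketch of the four steps above in Python with NumPy; the function name and the default interval count are our own choices.

```python
import numpy as np

def to_quantiles(values, num_intervals=10):
    """Convert a 1-D feature to quantile indexes scaled to [0, 1]."""
    values = np.asarray(values, dtype=float)
    # Interval boundaries chosen so each interval holds (roughly) the same number of examples.
    edges = np.quantile(values, np.linspace(0, 1, num_intervals + 1)[1:-1])
    index = np.searchsorted(edges, values, side='right')   # which interval each example falls in
    if num_intervals <= 1:
        return np.zeros_like(values)
    return index / (num_intervals - 1)                      # scale indexes to [0, 1]

data = np.random.default_rng(0).lognormal(size=1000)        # a skewed, non-Gaussian feature
print(to_quantiles(data)[:10])
```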
After converting data to quantiles, the similarity between two examples is inversely proportional to the number of examples between those two examples. Or, mathematically, where “x” is any example in the dataset:
\(sim(A,B) \approx 1 − | \text{prob}[x > A] − \text{prob}[x > B] |\) \(sim(A,B) \approx 1 − | \text{quantile}(A) − \text{quantile}(B) |\)
Quantiles are your best default choice to transform data. However, to create quantiles that are reliable indicators of the underlying data distribution, you need a lot of data. As a rule of thumb, to create \(n\) quantiles, you should have at least \(10n\) examples. If you don't have enough data, stick to normalization.
Check Your Understanding
For the following questions, assume you have enough data to create quantiles.
Question One: The data distribution is Gaussian. You have insight into what the data represents, which tells you that the data should not be transformed nonlinearly. As a result, you avoid quantiles and choose normalization instead.

Missing Data
If your dataset has examples with missing values for a certain feature but such examples occur rarely, then you can remove these examples. If such examples occur frequently, we have the option to either remove this feature altogether, or to predict the missing values from other examples by using a machine learning model. For example, you can infer missing numerical data by using a regression model trained on existing feature data. |
Given an arbitrary partial order $P=(X,R)$ if for any $a,b\in X$ with $(a,b)\not\in R$ and $(b,a)\not\in R$ we define $R'=R\cup\{x\in X:(x,a)\in R\}\times \{x\in X:(b,x)\in R\}$ then I can show that $P'=(X,R')$ is an order extension of $P$ and by repeating this processes if $X$ is finite, I can then obtain all linear extensions, and show the intersection of them is equal to $P$. However my argument only works for finite partial orders, so with all of that said how can I prove every uncountable partial order is the intersection of all of its linear extenstions?
"Uncountable" has nothning to do with it; every partial order, whether finite, uncountable, or countably infinite, is the intersection of all its linear intersections.
What you need to show, of course, is that if $a$ and $b$ are two incomparable elements in the poset $P=(X,R)$, then there is a linear order $T$ of $X$ such that $R\subseteq T$ and $(a,b)\in T$ (so that $(b,a)\notin T$). You can do this in two steps.
Construct a partial order $S$ of $X$ such that $R\subseteq S$ and $(a,b)\in S$.
Construct a linear order $T$ of $X$ such that $S\subseteq T$.
For step 1, let $S$ be the transitive closure of $R\cup\{(a,b)\}$ and prove that it's a partial order.
For step 2, if you haven't already proved that every partial order can be extended to a linear order, use Zorn's lemma to show that there is a maximal partial order extending $S$. Then show (as in step 1) that a maximal partial order on a set $X$ must be a linear order. |
Consider two parallel, independent $M/M/1/1$ queues (denoted $Q_i, Q_j$) with identical arrival rate $\lambda$ and service rate $\mu$, using FCFS (First Come First Served) discipline. Note that the last $1$ in the notation $M/M/1/1$ means that the system is of finite capacity $N = 1$. In other words, for each queue system, if there is some customer in service, no more customers can enter it.
For each customer $c$, its service-starting time, service-finishing time, and service interval are denoted by $c_{st}$, $c_{ft}$, and $[c_{st}, c_{ft}]$, respectively.
My Problems:Consider the following two concurrency related problems in such queueing system in the long run.
(1) Given customer $c$ served by $Q_i$, what is the probability that it starts during the service interval of some customer $c'$ served by $Q_j$ (i.e., $c_{st} \in [c'_{st}, c'_{ft}]$)?
Note that $c'$ will be unique if it exists, as shown in the figure.
(2) Conditioning on (1), what is the distribution of the service-starting time lag $c_{st} - c'_{st}$, as shown in the figure.
P.S. A solution to the first problem has been given by @user137846 at MathOverflow. However, I am not sure whether it is true or not. I am seeking more comments and detailed explanations. Edit: Although I have accepted this answer, I am not absolutely certain of its correctness. Comments and other answers are still highly appreciated. |
In classical mechanics we can describe the state of a system by a pair of quantities {\(\vec{R}, \vec{p}\)} where \(\vec{R}\) is the position of the object and \(\vec{p}\) is its momentum. The law of dynamics (given by Newton's second law, \(\sum{\vec{F}}=m\frac{d^2\vec{R}}{dt^2}\)) describes how the state of the object changes with time. The law of dynamics is deterministic. This means that if you know the initial state {\(\vec{R}_0,\vec{p}_0\)} of the system you can use the law of dynamics to fully determine the future state {\(\vec{R}(t),\vec{p}(t)\)} of the system at any future time \(t\).
The law of dynamics is reversible. This means that if you took two identical systems which start out in different initial states (say {\(\vec{R}_1(t_0),\vec{p}_1(t_0)\)} and {\(\vec{R}_2(t_0),\vec{p}_2(t_0)\)} respectively), they will evolve with time according to the law of dynamics in such a way that they remain in different states. Suppose however the states {\(\vec{R}_1(t_0),\vec{p}_1(t_0)\)} and {\(\vec{R}_2(t_0),\vec{p}_2(t_0)\)} evolved into the same state {\(\vec{R}(t),\vec{p}(t)\)}. If the only information you had was {\(\vec{R}(t),\vec{p}(t)\)} then, using the law of dynamics, there would be no way to know for sure whether this state evolved from {\(\vec{R}_1(t_0),\vec{p}_1(t_0)\) or {\(\vec{R}_2(t_0),\vec{p}_2(t_0)\)}. Because Newton’s law of dynamics is reversible this will not happen and the two systems will stay in different states (say {\(\vec{R}_1(t),\vec{p}_1(t)\)} and {\(\vec{R}_2(t),\vec{p}_2(t)\)}) for all times \(t\). A consequence of this is that if you know the state of either system at time \(t\) you can always use the law of dynamics to determine the state of either system at an earlier time \(t_0\).
In quantum dynamics we assume that the states of different isolated systems are deterministic and reversible. (Do not confuse the states being deterministic with the measurements being deterministic—the latter, of course, is not deterministic.) If two different systems start out in two different states \(|\psi(t_0)⟩\) and \(|\phi(t_0)⟩\) then they will remain in different states and \(|\psi(t)⟩\) and \(|\phi(t)⟩\) will stay different for all times \(t\). A consequence of this is that the inner product \(⟨\psi(t)|\phi(t)⟩\) will remain unchanged.
Schrödinger's time-dependent equation is the single most important equation in quantum mechanics. It is used to determine how any state \(|\psi(t)⟩\) of a quantum system changes with time; at all times \(t\), you'll know what \(|\psi(t)⟩\) is. This equation is also used to determine the probability \(P(L,t)\) of measuring any physical quantity \(L\) at any time \(t\). What the two functions \(|\psi(t)⟩\) and \(P(L,t)\) are depends on the total energy of the system (which is associated with the Energy operator \(\hat{E}\)) and the initial state \(|\psi(0)⟩\) of the system. All you need are these two initial conditions to determine the entire future of the system. In classical mechanics the state of a particle is specified by two quantities—the position \(\vec{R}\) and the momentum \(\vec{p}\).
In quantum mechanics if we knew the initial state \(|\psi_i(0)⟩\) of every particle in the universe, we could use the quantum analogue of Newton's second law—namely Schrodinger's time-dependent equation—to determine the future state \(|\psi_i(t)⟩\) of every particle in the universe. But where quantum mechanics differs from classical mechanics is that the state \(|\psi_i(t)⟩\) does not encapsulate everything about the system—rather it encapsulates everything that
can be known about the system, which isn't everything. Each particle would have its own wavefunction \(\psi_{i,j}(t)\); in general there would be a probability amplitude associated with any physical measurement. Although the probability function \(P(L,t)\) is deterministic, the measurement of any physical quantity \(L\) is inherently probabilistic. Therefore there is an inherent randomness built into the cosmos.
This article is licensed under a CC BY-NC-SA 4.0 license. |
In mathematics, the base flow of a random dynamical system is the dynamical system defined on the "noise" probability space that describes how to "fast forward" or "rewind" the noise when one wishes to change the time at which one "starts" the random dynamical system.

Definition
In the definition of a random dynamical system, one is given a family of maps $\vartheta_{s} : \Omega \to \Omega$ on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. The measure-preserving dynamical system $(\Omega, \mathcal{F}, \mathbb{P}, \vartheta)$ is known as the base flow of the random dynamical system. The maps $\vartheta_{s}$ are often known as shift maps since they "shift" time. The base flow is often ergodic.
The parameter $s$ may be chosen to run over
$\mathbb{R}$ (a two-sided continuous-time dynamical system);
$[0, + \infty) \subsetneq \mathbb{R}$ (a one-sided continuous-time dynamical system);
$\mathbb{Z}$ (a two-sided discrete-time dynamical system);
$\mathbb{N} \cup \{ 0 \}$ (a one-sided discrete-time dynamical system).
Each map $\vartheta_{s}$ is required
to be a $(\mathcal{F}, \mathcal{F})$-measurable function: for all $E \in \mathcal{F}$, $\vartheta_{s}^{-1} (E) \in \mathcal{F}$;
to preserve the measure $\mathbb{P}$: for all $E \in \mathcal{F}$, $\mathbb{P} (\vartheta_{s}^{-1} (E)) = \mathbb{P} (E)$.
Furthermore, as a family, the maps $\vartheta_{s}$ satisfy the relations
$\vartheta_{0} = \mathrm{id}_{\Omega} : \Omega \to \Omega$, the identity function on $\Omega$;
$\vartheta_{s} \circ \vartheta_{t} = \vartheta_{s + t}$ for all $s$ and $t$ for which the three maps in this expression are defined. In particular, $\vartheta_{s}^{-1} = \vartheta_{-s}$ if $-s$ exists.
In other words, the maps $\vartheta_{s}$ form a commutative monoid (in the cases $s \in \mathbb{N} \cup \{ 0 \}$ and $s \in [0, + \infty)$) or a commutative group (in the cases $s \in \mathbb{Z}$ and $s \in \mathbb{R}$).
Example
In the case of a random dynamical system driven by a Wiener process $W : \mathbb{R} \times \Omega \to X$, where $(\Omega, \mathcal{F}, \mathbb{P})$ is the two-sided classical Wiener space, the base flow $\vartheta_{s} : \Omega \to \Omega$ would be given by
$$W (t, \vartheta_{s} (\omega)) = W (t + s, \omega) - W(s, \omega).$$
This can be read as saying that $\vartheta_{s}$ "starts the noise at time $s$ instead of time 0".
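As a small illustration, here is how the shift acts on a sampled approximation of a Brownian path; the check below simply verifies the defining relation on a discretized path, with array indices standing in for times.

```python
import numpy as np

# Sample one discretized Brownian path W(k*dt, omega), k = 0..10000.
rng = np.random.default_rng(1)
dt = 0.001
increments = rng.normal(0.0, np.sqrt(dt), size=10_000)
W = np.concatenate([[0.0], np.cumsum(increments)])

def shifted(W, s_index):
    """The path W(., theta_s(omega)): restart the noise at time s and subtract W(s)."""
    return W[s_index:] - W[s_index]

s = 2500
W_shifted = shifted(W, s)
# By construction W_shifted[k] == W[k + s] - W[s]; the shifted path again starts at 0.
print(np.allclose(W_shifted[:100], W[s:s + 100] - W[s]))
```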
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box..
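If I'm reading that right, it's the usual division-based Euclidean algorithm; a tiny sketch (the standard version feeds $b, r$ back into the division step):

```python
def gcd(a, b):
    """Repeated division: a = b*q + r, then recurse on (b, r) until r == 0."""
    while b != 0:
        q, r = divmod(a, b)   # the "division box": outputs quotient q and remainder r
        a, b = b, r           # feed (b, r) back in; stop when the remainder is 0
    return a

print(gcd(252, 105))  # 21
```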
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
Okay, now I've rather carefully discussed one example of \(\mathcal{V}\)-enriched profunctors, and rather sloppily discussed another. Now it's time to build the general framework that can handle both these examples.
We can define \(\mathcal{V}\)-enriched categories whenever \(\mathcal{V}\) is a monoidal preorder: we did that way back in Lecture 29. We can also define \(\mathcal{V}\)-enriched functors whenever \(\mathcal{V}\) is a monoidal preorder: we did that in Lecture 31. But to define \(\mathcal{V}\)-enriched profunctors, we need \(\mathcal{V}\) to be a bit better. We can see why by comparing our examples.
Our first example involved \(\mathcal{V} = \textbf{Bool}\). A
feasibility relation
$$ \Phi : X \nrightarrow Y $$ between preorders is a monotone function
$$ \Phi: X^{\text{op}} \times Y\to \mathbf{Bool} . $$ We shall see that a feasibility relation is the same as a \( \textbf{Bool}\)-enriched profunctor.
Our second example involved \(\mathcal{V} = \textbf{Cost}\). I said that a \( \textbf{Cost}\)-enriched profunctor
$$ \Phi : X \nrightarrow Y $$ between \(\mathbf{Cost}\)-enriched categories is a \( \textbf{Cost}\)-enriched functor
$$ \Phi: X^{\text{op}} \times Y \to \mathbf{Cost} $$ obeying some conditions. But I let you struggle to guess those conditions... without enough clues to make it easy!
To fit both our examples in a general framework, we start by considering an arbitrary monoidal preorder \(\mathcal{V}\). \(\mathcal{V}\)-enriched profunctors will go between \(\mathcal{V}\)-enriched categories. So, let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(\mathcal{V}\)-enriched categories. We want to make this definition:
Tentative Definition. A \(\mathcal{V}\)-enriched profunctor
$$ \Phi : \mathcal{X} \nrightarrow \mathcal{Y} $$ is a \(\mathcal{V}\)-enriched functor
$$ \Phi: \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} .$$ Notice that this handles our first example very well. But some questions appear in our second example - and indeed in general. For our tentative definition to make sense, we need three things:
We need \(\mathcal{V}\) to itself be a \(\mathcal{V}\)-enriched category.
We need any two \(\mathcal{V}\)-enriched categories to have a 'product', which is again a \(\mathcal{V}\)-enriched category.
We need any \(\mathcal{V}\)-enriched category to have an 'opposite', which is again a \(\mathcal{V}\)-enriched category.
Items 2 and 3 work fine whenever \(\mathcal{V}\) is a commutative monoidal poset. We'll see why in Lecture 62.
Item 1 is trickier, and indeed it sounds rather scary. \(\mathcal{V}\) began life as a humble monoidal preorder. Now we're wanting it to be
enriched in itself! Isn't that circular somehow?
Yes! But not in a bad way. Category theory often eats its own tail, like the mythical ouroboros, and this is an example.
To get \(\mathcal{V}\) to become a \(\mathcal{V}\)-enriched category, we'll demand that it be 'closed'. For starters, let's assume it's a monoidal
poset, just to avoid some technicalities.

Definition. A monoidal poset is closed if for all elements \(x,y \in \mathcal{V}\) there is an element \(x \multimap y \in \mathcal{V}\) such that
$$ x \otimes a \le y \text{ if and only if } a \le x \multimap y $$ for all \(a \in \mathcal{V}\).
This will let us make \(\mathcal{V}\) into a \(\mathcal{V}\)-enriched category by setting \(\mathcal{V}(x,y) = x \multimap y \). But first let's try to understand this concept a bit!
We can check that our friend \(\mathbf{Bool}\) is closed. Remember, we are making it into a monoidal poset using 'and' as its binary operation: its full name is \( (\lbrace \text{true},\text{false}\rbrace, \wedge, \text{true})\). Then we can take \( x \multimap y \) to be 'implication'. More precisely, we say \( x \multimap y = \text{true}\) iff \(x\) implies \(y\). Even more precisely, we define:
$$ \text{true} \multimap \text{true} = \text{true} $$$$ \text{true} \multimap \text{false} = \text{false} $$$$ \text{false} \multimap \text{true} = \text{true} $$$$ \text{false} \multimap \text{false} = \text{true} . $$
Puzzle 188. Show that with this definition of \(\multimap\) for \(\mathbf{Bool}\) we have
$$ a \wedge x \le y \text{ if and only if } a \le x \multimap y $$ for all \(a,x,y \in \mathbf{Bool}\).
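If you like to check such laws mechanically, here is a minimal Python sketch (the encoding of \(\mathbf{Bool}\) as Python booleans and of \(\multimap\) as a small function are my own choices, not part of the lecture) that verifies the 'if and only if' above by brute force over all eight triples:

from itertools import product

BOOL = [False, True]          # the two elements; the order has false <= true

def leq(a, b):
    # the partial order on Bool: false <= false, false <= true, true <= true
    return (not a) or b

def imp(x, y):
    # x -o y, i.e. implication, matching the truth table above
    return (not x) or y

# check:  (a and x) <= y   if and only if   a <= (x -o y)
for a, x, y in product(BOOL, repeat=3):
    assert leq(a and x, y) == leq(a, imp(x, y))
print("the adjunction holds for all 8 triples")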
We can also check that our friend \(\mathbf{Cost}\) is closed! Remember, we are making it into a monoidal poset using \(+\) as its binary operation: its full name is \( ([0,\infty], \ge, +, 0)\). Then we can define \( x \multimap y \) to be 'subtraction'. More precisely, we define \(x \multimap y\) to be \(y - x\) if \(y \ge x\), and \(0\) otherwise.
Puzzle 189. Show that with this definition of \(\multimap\) for \(\mathbf{Cost}\) we have
$$ a + x \le y \text{ if and only if } a \le x \multimap y . $$But beware. We have defined the ordering on \(\mathbf{Cost}\) to be the
opposite of the usual ordering of numbers in \([0,\infty]\). So, \(\le\) above means the opposite of what you might expect!
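Here too a brute-force check is possible. The Python sketch below is my own illustration: it samples only finite costs (the corner case \(x = y = \infty\) needs the extra convention \(\infty \multimap \infty = 0\)) and reads \(\le\) as the reversed order, exactly as warned above.

def leq(a, b):
    # the order on Cost is the opposite of the usual one: "a <= b" in Cost means a >= b as numbers
    return a >= b

def lolli(x, y):
    # x -o y = truncated subtraction: y - x if y >= x as numbers, and 0 otherwise
    return max(y - x, 0)

# check:  a + x <= y  iff  a <= (x -o y),  with <= read in Cost
grid = range(0, 13)           # a finite sample of costs; infinity is left out here
for a in grid:
    for x in grid:
        for y in grid:
            assert leq(a + x, y) == leq(a, lolli(x, y))
print("the adjunction holds on all sampled triples")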
Next, two more tricky puzzles. Next time I'll show you in general how a closed monoidal poset \(\mathcal{V}\) becomes a \(\mathcal{V}\)-enriched category. But to appreciate this, it may help to try some examples first:
Puzzle 190. What does it mean, exactly, to make \(\mathbf{Bool}\) into a \(\mathbf{Bool}\)-enriched category? Can you see how to do this by defining
$$ \mathbf{Bool}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Bool}\), where \(\multimap\) is defined to be 'implication' as above?
Puzzle 191. What does it mean, exactly, to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category? Can you see how to do this by defining
$$ \mathbf{Cost}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Cost}\), where \(\multimap\) is defined to be 'subtraction' as above?
Note: for Puzzle 190 you might be tempted to say "a \(\mathbf{Bool}\)-enriched category is just a preorder, so I'll use that fact here". However, you may learn more if you go back to the general definition of enriched category and use that! The reason is that we're trying to understand some general things by thinking about two examples.
Puzzle 192. The definition of 'closed' above is an example of a very important concept we keep seeing in this course. What is it? Restate the definition of closed monoidal poset in a more elegant, but equivalent, way using this concept. |
Consider the following problem taken from a problem booklet. My questions are:
What is the displacement vector? How do I determine the direction of the displacement vector at a certain point? And where is the position with zero displacement vector?
Any material between two nodes is displaced in the same direction. So the direction of
B and C has to be the same as well as the direction of A and D due to symmetry. In addition, the direction of A must be the opposite of B since they are across from a node. Similarly the direction of C and D must be opposite.
So the two possible configurations are
A-->  <--B    <--C  D-->    (figure d)
<--A  B-->    C-->  <--D    (figure c)
The correct answer is
(2).
Just because a wave is a standing wave doesn't necessarily mean that the particles themselves do not move; in fact, if the particles themselves didn't move there wouldn't be any wave motion at all. For a longitudinal wave (made of particles that oscillate in the direction of wave propagation, like the sound waves here) particles oscillate left and right but have no net displacement.
Take a look at this site on standing waves in compression/longitudinal motion, perhaps it will help you understand what the answer is and why it is the correct one.
A standing wave is a wave that has nodes. The points of the wave go up and down in some places, and remain at zero at others (the nodes). The general form of a standing wave is a sine curve that remains at a fixed position, but its amplitude changes in time between $+A_0$ and $-A_0$. Specifically, there is a time where the wave form is completely flat.
The formula is something like
$$f(x,t) = A_0\cos(\omega t)\,\sin(kx)$$
(not the most general form). Compare to a moving wave which has a fixed amplitude, but a changing offset, so it seems to move along the axis.
$$ f(x,t) = A_0\sin(\omega t + kx)$$
Now in your case you have a tube with air. Your waves don't go up and down (transverse), but back and forth (longitudinal). The nodes are points where the air doesn't move, anti-nodes are where the air moves maximally. Still, it can be described by the same equation. You can try to draw a sine curve through your first figure. The $y$ value should be the air displacement at point $x$, at a fixed time ($t=0$ or $t=\pi/\omega$). The sine curve must cross the $x$ axis at the nodes, and have maxima and minima at the antinodes. There are two ways to draw the curve, which are mirrored along the $x$ axis. A positive displacement means that the air molecules are moved to the right (compared to their equilibrium positions, which they occupy at $t=(\pi/2)/\omega$), a negative displacement means they are moved to the left. You should be able to read off the correct displacement vectors from your drawing.
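If it helps to see numbers rather than a drawing, here is a small Python sketch of the standing-wave formula above (the amplitude, wavenumber, angular frequency and sample points are made-up values, chosen only for illustration). It shows that two points on opposite sides of a node are displaced in opposite directions, and that the whole pattern vanishes a quarter period later.

import math

A0, k, omega = 1.0, 2 * math.pi, 2 * math.pi   # assumed amplitude, wavenumber, angular frequency

def displacement(x, t):
    # standing wave: f(x, t) = A0 * cos(omega t) * sin(k x)
    return A0 * math.cos(omega * t) * math.sin(k * x)

# nodes at x = 0, 0.5, 1.0, ...; antinodes at x = 0.25, 0.75, ...
points = {"node": 0.5, "A (left of node)": 0.4, "B (right of node)": 0.6}
for t in (0.0, 0.25, 0.5):   # t = 0, a quarter period, half a period
    row = ", ".join(f"{name}: {displacement(x, t):+.2f}" for name, x in points.items())
    print(f"t = {t:4.2f}  ->  {row}")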
A little caveat: Don't confuse displacement and pressure, or speed. The nodes always have zero displacement, but the pressure there changes all the time. The points A, B, C, D (on the slopes of the curve) sometimes have zero displacement, when the waveform crosses the $x$-axis, but at that moment the air has the highest speed (change of displacement). |
As in your question the stress was on the word
general, I have some bad news: an efficient "general solver (or a theoretical algorithm) for (...) extended Ising models, which involves an arbitrary lattice" does not exist.
Of course, one can invent algorithms that, in principle, could find the ground state. The most trivial would be checking the energy of all configurations (I guess this is what you referred to by
exact enumeration). However, the time used by this algorithm would scale exponentially with the number of sites. One may ask - as you did - whether one can do something substantially better, e.g., finding an algorithm where the "used time" scales polynomially with the number of sites. Unfortunately, there is no such algorithm (if we assume that NP$\ne$P). Already for the ordinary Ising Hamiltonian
$$H= \sum_{ab} J_{ab}S_aS_b $$
with $J_{ab}\in\{+1,-1,0\}$ and with connectivity on an $L\times L\times 2$ cubic lattice, it was shown that finding the ground state is an NP-hard problem. The proof of this can be found here:
F. Barahona,
On the computational complexity of Ising spin glass models, J. Phys. A: Math. Gen., 15 3241 (1982).
Of course, if you don't consider the general case, but restrict your attention to a set of "easier" lattices or graphs (e.g. to planar graphs), then there could be polynomial-time algorithms (depending on the structure of the specific restricted cases).
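To make the exact-enumeration baseline concrete, here is a minimal Python sketch; the small random graph and the couplings $J_{ab}\in\{+1,-1\}$ are arbitrary choices for illustration, and the running time grows like $2^N$ in the number of spins $N$.

import itertools
import random

random.seed(1)
N = 12                                          # number of spins; 2**N configurations
edges = [(a, b) for a in range(N) for b in range(a + 1, N) if random.random() < 0.3]
J = {e: random.choice([-1, 1]) for e in edges}  # couplings J_ab in {+1, -1}; absent edges mean J_ab = 0

def energy(spins):
    # H = sum_{ab} J_ab S_a S_b  with S_a in {+1, -1}
    return sum(J[(a, b)] * spins[a] * spins[b] for (a, b) in edges)

# exact enumeration: check every one of the 2**N configurations
best = min(itertools.product([-1, 1], repeat=N), key=energy)
print("ground-state energy:", energy(best))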
$e^+e^-$ Annihilation near Threshold
Top quark polarization in $e^+e^-$ annihilation into $t\bar t$ is calculated for linearly polarized beams. The Green function formalism is applied to this reaction near threshold. The Lippmann–Schwinger equations for the $S$-wave and $P$-wave Green functions are solved numerically for the QCD chromostatic potential given by the two-loop formula for large momentum transfer and Richardson's ansatz for intermediate and small momenta. $S$-$P$-wave interference contributes to all components of the top quark polarization vector. Rescattering of the decay products is considered. The mean values $\langle n \ell \rangle$ of the charged lepton four-momentum projections on appropriately chosen directions $n$ in semileptonic top decays are proposed as experimentally observable quantities sensitive to top quark polarization. The results for $\langle n \ell \rangle$ are obtained including $S$-$P$-wave interference and rescattering of the decay products. It is demonstrated that for the longitudinally polarized electron beam a highly polarized sample of top quarks can
Recent calculations are presented of top quark polarization in $t\bar t$ pair production close to threshold. $S$-$P$-wave interference gives contributions to all components of the top quark polarization vector. Rescattering of the decay products is considered. Moments of the four-momentum of the charged lepton in semileptonic top decays are calculated and shown to be very sensitive to the top
M. Jezabek, R. Harlander, J.H. Kuehn and M. Peter
Proceedings of the Workshop on “Physics and Experiments at Linear Colliders”, Morioka-Appi, Japan, Sept. 1995, pp. 436-446
TTP95-46 TOP QUARK PAIR PRODUCTION IN THE THRESHOLD REGION
Recent results on production and decays of polarized top quarks are reviewed. Top quark pair production in $e^+e^-$ annihilation is considered near energy threshold. For longitudinally polarized electrons the produced top quarks and antiquarks are highly polarized. Dynamical effects originating from strong interactions and Higgs boson exchange in the $t-\bar t$ system can be calculated using the Green function method. Energy-angular distributions of leptons in semileptonic decays are sensitive to the polarization of the decaying top quark and to the Lorentz structure of the weak charged current.
Marek Jezabek
Proceedings of the EPS Conference on HEP, Brussels, July 1995, J. Lemonne et al. eds., World Scientific 1996, pp. 671-673
TTP95-44 The Scalar Contribution to $\tau\to K\pi\nu_\tau$
We consider the scalar form factor in $\tau \to K\pi \nu_\tau$ decays. It receives contributions both from the scalar resonance $K_0^*(1430)$ and from the scalar projection of off-shell vector resonances. We construct a model for the hadronic current which includes the vector resonances $K^*(892)$ and $K^*(1410)$ and the scalar resonance $K_0^*(1430)$. The parameters of the model are fixed by matching to the $O(p^4)$ predictions of chiral perturbation theory. Suitable angular correlations of the $K\pi$ system allow for a model independent separation of the vector and scalar form factor. Numerical results for the relevant structure functions are presented.
TTP95-42 Dijet Production at HERA in Next-to-Leading Order
Two-jet cross sections in deep inelastic scattering at HERA are calculated in next-to-leading order. The QCD corrections are implemented in a new $ep\rightarrow n$ jets event generator, MEPJET, which allows one to analyze arbitrary jet definition schemes and general cuts in terms of parton 4-momenta. First results are presented for the JADE, the cone and the $k_T$ schemes. For the $W$-scheme, disagreement with previous results and large radiative corrections and recombination scheme ambiguities are traced to a common origin.
TTP95-41 Heavy Quark Vacuum Polarization to Three Loops
The real and imaginary part of the vacuum polarization function $\Pi(q^2)$ induced by a massive quark is calculated in perturbative QCD up to order $\alpha_s^2$. The method is described and the results are presented. This extends the calculation by Källén and Sabry from two to three loops.
We review current issues in exclusive semileptonic tau decays. We present the formalism of structure functions, and then discuss predictions for final states with kaons, for decays into four pions and for radiative corrections to the decay into a single pion.
J.H. Kühn, E. Mirkes and M. Finkemeier
Proceedings of the EPS Conference on HEP, Brussels, July 1995, J. Lemonne et al. eds., World Scientific 1996, pp. 631-635
TTP95-37 Radiation of Light Fermions in Heavy Fermion Production
Recent analytic calculations on the rate for the production of a pair of massive fermions in $e^+ e^-$ annihilation plus real or virtual radiation of a pair of massless fermions are discussed. The contributions for real and virtual radiation are displayed separately. The asymptotic behaviour close to threshold is given in a compact form and an application to the angular distribution of massive quarks close to threshold is presented.
A.H. Hoang, J.H. Kühn (Karlsruhe U., TTP) and T. Teubner (Durham U.)
Proceedings of the EPS Conference on HEP, Brussels, July 1995, J. Lemonne et al. eds, World Scientific 1996, pp. 343-344
TTP95-36 Hadronic Decays of Excited Heavy Quarkonia
We construct an effective Lagrangian for the hadronic decays of a heavy excited $s$-wave-spin-one quarkonium $\Psi'$ into a lower $s$-wave-spin-one state $\Psi$. We show that reasonable fits to the measured invariant mass spectra in the charmonium and bottomonium systems can be obtained within this framework. The mass dependence of the various terms in the Lagrangian is discussed on the basis of a quark model.
The electromagnetic corrections to the masses of the pseudoscalar mesons $\pi$ and $K$ are considered. We calculate in chiral perturbation theory the contributions which arise from resonances within a photon loop at order $O(e^2 m_q)$. Within this approach we find rather moderate deviations to Dashen's
TTP95-26 ANGULAR DISTRIBUTIONS OF MASSIVE QUARKS AND LEPTONS CLOSE TO THRESHOLD
Predictions for the angular distribution of massive quarks and leptons are presented, including QCD and QED corrections. Recent results for the fermionic part of the two-loop corrections to the electromagnetic form factors are combined with the BLM scale fixing prescription. Two distinctly different scales arise as arguments of $\alpha_s(\mu^2)$ near threshold: the relative momentum of the quarks governing the soft gluon exchange responsible for the Coulomb potential, and a large momentum scale approximately equal to twice the quark mass for the corrections induced by transverse gluons. Numerical predictions for charmed, bottom, and top quarks are given. One obtains a direct determination of $\alpha_{V}(Q^2)$, the coupling in the heavy quark potential, which can be compared with lattice gauge theory predictions. The corresponding QED results for $\tau$ pair production allow for a measurement of the magnetic moment of the $\tau$ and could be tested at a future $\tau$-charm factory.
S.J. Brodsky, A.H. Hoang, J.H. Kühn and T. Teubner,
TTP95-24 Fragmentation production of doubly heavy baryons
Baryons with a single heavy quark are being studied experimentally at present. Baryons with two units of heavy flavor will be abundantly produced not only at future colliders, but also at existing facilities. In this paper we study the production via heavy quark fragmentation of baryons containing two heavy quarks at the Tevatron, the LHC, HERA, and the NLC. The production rate is woefully small at HERA and at the NLC, but significant at $pp$ and $p\bar{p}$ machines. We present distributions in various kinematical variables in addition to the integrated cross sections at hadron colliders.
TTP95-20 Three-loop QCD Corrections to $\delta\rho$, $\Delta r$ and $\Delta\kappa$
QCD corrections to electroweak observables are reviewed. Recent results on contributions from the top-bottom doublet of ${\cal O}(\alpha_s^2)$ to $\delta\rho$, $\Delta r$ and $\Delta\kappa$ are presented. It is demonstrated that the first three terms in the expansion in $M_Z^2/M_t^2$ provide an excellent approximation to the exact result. Calculational techniques are briefly discussed.
K.G. Chetyrkin, J.H. Kuehn, M. Steinhauser
Proceedings of the Workshop “Perspectives for Electroweak Interactions in e+e- Collisions”, B. A. Kniehl, ed., World Scientific 1995, pp. 97-108
TTP95-17 HADRON RADIATION IN TAU PRODUCTION AND THE LEPTONIC Z BOSON DECAY RATE
Secondary radiation of hadrons from a tau pair produced in electron positron collisions may constitute an important obstacle for precision measurements of the production cross section and of branching ratios. The rate for real and virtual radiation is calculated and various distributions are presented. For Z decays a comprehensive analysis is performed which incorporates real and virtual radiation of leptons. The corresponding results are also given for primary electron and muon pairs. Compact analytical formulae are presented for entirely leptonic configurations. Measurements of $Z$ partial decay rates which eliminate all hadron and lepton radiation are about 0.3\% to 0.4\% lower than totally inclusive measurements, a consequence of the ${\cal O}(\alpha^2)$ negative virtual corrections which are enhanced by the third power of a large logarithm.
Possibilities for measuring the $J^{PC}$ quantum numbers of the Higgs particle through its interactions with gauge bosons and with fermions are discussed. Observables which indicate CP violation in these couplings are also identified.
M. L. Stong
Proceedings of the Workshop “Perspectives for Electroweak Interactions in e+e- Collisions”, B. A. Kniehl, ed., World Scientific 1995, pp. 317-328
TTP95-13 QCD Corrections from Top Quark to Relations between Electroweak Parameters to Order $\alpha_s^2$
The vacuum polarization functions $\Pi(q^2)$ of charged and neutral gauge bosons which arise from top and bottom quark loops lead to important shifts in relations between electroweak parameters which can be measured with ever-increasing precision. The large mass of the top quark allows approximation of these functions through the first two terms of an expansion in $M_Z^2/M_t^2$. The first three terms of the Taylor series of $\Pi(q^2)$ are evaluated analytically up to order $\alpha_s^2$. The first two are required to derive the approximation, the third can be used to demonstrate the smallness of the neglected terms. The paper improves earlier results based on the leading term $\propto G_F M_t^2 \alpha_s^2$. Results for the subleading contributions to $\Delta r$ and the effective mixing angle $\sin^2\theta_{\mathrm{eff}}$ are presented.
Recent theoretical results on the production and decay of top quarks are presented. The implications of the new experimental results from the TEVATRON are briefly discussed. Predictions for the top quark decay rate and distributions are described, including the influence of QCD and electroweak radiative corrections. Top production at an $e^+e^-$ collider is discussed with emphasis towards the threshold region. The polarization of top quarks in the threshold region is calculated with techniques based on Green's functions for $S$ and $P$ waves.
TTP95-10 RADIATION OF LIGHT FERMIONS IN HEAVY FERMION PRODUCTION
The rate for the production of a pair of massive fermions in $e^+ e^-$ annihilation plus real or virtual radiation of a pair of massless fermions is calculated analytically. The contributions for real and virtual radiation are displayed separately. The asymptotic behaviour close to threshold and for high energies is given in a compact form. These approximations provide arguments for the appropriate choice of the scale in the ${\cal O}(\alpha)$ result, such that no large logarithms remain in the final answer.
TTP95-09 Approximating the radiatively corrected Higgs mass in the Minimal Supersymmetric Model
To obtain the most accurate predictions for the Higgs masses in the minimal supersymmetric model (MSSM), one should compute the full set of one-loop radiative corrections, resum the large logarithms to all orders, and add the dominant two-loop effects. A complete computation following this procedure yields a complex set of formulae which must be analyzed numerically. We discuss a very simple approximation scheme which includes the most important terms from each of the three components mentioned above. We estimate that the Higgs masses computed using our scheme lie within 2 GeV of their theoretically predicted values over a very large fraction of MSSM
TTP95-08 Rho - omega mixing in chiral perturbation theory
In order to calculate the $\rho^0 -\omega$ mixing we extend the chiral couplings of the low-lying vector mesons in chiral perturbation theory to a lagrangian that contains two vector fields. We determine the $p^2$ dependence of the two-point function and recover an earlier result for the on-shell expression.
TTP95-05 QCD Corrections to Electroweak Annihilation Decays of Superheavy Quarkonia
QCD corrections to all the allowed decays of superheavy ground state quarkonia into electroweak gauge and Higgs bosons are presented. For quick estimates, approximations that reproduce the exact results to within at worst two percent are also given.
TTP95-04 Recent Results on QCD Corrections to Semileptonic $b$-Decays
We summarize recent results on QCD corrections to various observables in semileptonic $b$ quark decays. For massless leptons in the final state we present effects of such corrections on the triple differential distribution of leptons, which are important in studies of polarized $b$ quark decays. Analogous formulas for distributions of neutrinos are applicable in decays of polarized $c$ quarks. In the case of decays with a $\tau$ lepton in the final state the mass effect of the $\tau$ has to be included. In this case we concentrate on corrections
Andrzej Czarnecki and Marek Jezabek
138th WE-Heraeus Seminar: Heavy Quark Physics, eds. J. Körner, P. Kroll, World Scientific 1995, 67-74
The three-loop QCD corrections to the $\rho$ parameter from top and bottom quark loops are calculated. The result differs from the one recently calculated by Avdeev et al. As a function of the pole mass the numerical value is given by $\delta\rho=\frac{3G_F M_t^2}{8\sqrt{2}\pi^2}\left(1- 2.8599\, \frac{\alpha_s}{\pi}- 14.594\, \left(\frac{\alpha_s}{\pi}\right)^2 \right)$.
TTP95-02 Spectra of baryons containing two heavy quarks.
The spectra of baryons containing two heavy quarks test the form of the $QQ$ potential through the spin-averaged masses and hyperfine splittings. The mass splittings in these spectra are calculated in a nonrelativistic potential model and the effects of varying the potential studied. The simple description in terms of light quark
Skills to Develop
Express products as sums. Express sums as products.
A band marches down the field creating an amazing sound that bolsters the crowd. That sound travels as a wave that can be interpreted using trigonometric functions.
Figure \(\PageIndex{1}\): The UCLA marching band (credit: Eric Chan, Flickr).
For example, Figure \(\PageIndex{2}\) represents such a sound wave.
Figure \(\PageIndex{2}\)
Expressing Products as Sums
We have already learned a number of formulas useful for expanding or simplifying trigonometric expressions, but sometimes we may need to express the product of cosine and sine as a sum. We can use the
product-to-sum formulas, which express products of trigonometric functions as sums. Let’s investigate the cosine identity first and then the sine identity. Expressing Products as Sums for Cosine
We can derive the product-to-sum formula from the sum and difference identities for
cosine. If we add the two equations, we get:
\[\begin{align*} \cos \alpha \cos \beta+\sin \alpha \sin \beta&= \cos(\alpha-\beta)\\[4pt] \underline{+ \cos \alpha \cos \beta-\sin \alpha \sin \beta}&= \underline{ \cos(\alpha+\beta) }\\[4pt] 2 \cos \alpha \cos \beta&= \cos(\alpha-\beta)+\cos(\alpha+\beta)\end{align*}\]
Then, we divide by 2 to isolate the product of cosines:
\[ \cos \alpha \cos \beta= \dfrac{1}{2}[\cos(\alpha-\beta)+\cos(\alpha+\beta)] \label{eq1}\]
How to: Given a product of cosines, express as a sum
Write the formula for the product of cosines. Substitute the given angles into the formula. Simplify.
Example \(\PageIndex{1}\): Writing the Product as a Sum Using the Product-to-Sum Formula for Cosine
Write the following product of cosines as a sum: \(2\cos\left(\dfrac{7x}{2}\right) \cos\left(\dfrac{3x}{2}\right)\).
Solution
We begin by writing the formula for the product of cosines (Equation \ref{eq1}):
\[ \cos \alpha \cos \beta = \dfrac{1}{2}[ \cos(\alpha-\beta)+\cos(\alpha+\beta) ] \nonumber \]
We can then substitute the given angles into the formula and simplify.
\[\begin{align*} 2 \cos\left(\dfrac{7x}{2}\right)\cos\left(\dfrac{3x}{2}\right)&= 2\left(\dfrac{1}{2}\right)[ \cos\left(\dfrac{7x}{2}-\dfrac{3x}{2}\right)+\cos\left(\dfrac{7x}{2}+\dfrac{3x}{2}\right) ]\\[4pt] &= \cos\left(\dfrac{4x}{2}\right)+\cos\left(\dfrac{10x}{2}\right) \\[4pt] &= \cos 2x+\cos 5x \end{align*}\]
Exercise \(\PageIndex{1}\)
Use the product-to-sum formula (Equation \ref{eq1}) to write the product as a sum or difference: \(\cos(2\theta)\cos(4\theta)\).
Answer
\(\dfrac{1}{2}(\cos 6\theta+\cos 2\theta)\)
Expressing the Product of Sine and Cosine as a Sum
Next, we will derive the product-to-sum formula for sine and cosine from the sum and difference formulas for
sine. If we add the sum and difference identities, we get:
\[\begin{align*} \sin \alpha \cos \beta+\cos \alpha \sin \beta&= \sin(\alpha+\beta)\\[4pt] \underline{+ \sin \alpha \cos \beta-\cos \alpha \sin \beta}&= \sin(\alpha-\beta)\\[4pt] 2 \sin \alpha \cos \beta&= \sin(\alpha+\beta)+\sin(\alpha-\beta)\\[4pt] \text{Then, we divide by 2 to isolate the product of sine and cosine:}\\[4pt] \sin \alpha \cos \beta&= \dfrac{1}{2}\left[\sin(\alpha+\beta)+\sin(\alpha-\beta)\right] \end{align*}\]
Example \(\PageIndex{2}\): Writing the Product as a Sum Containing only Sine or Cosine
Express the following product as a sum containing only sine or cosine and no products: \(\sin(4\theta)\cos(2\theta)\).
Solution
Write the formula for the product of sine and cosine. Then substitute the given values into the formula and simplify.
\[\begin{align*} \sin \alpha \cos \beta&= \dfrac{1}{2}[ \sin(\alpha+\beta)+\sin(\alpha-\beta) ]\\[4pt] \sin(4\theta)\cos(2\theta)&= \dfrac{1}{2}[\sin(4\theta+2\theta)+\sin(4\theta-2\theta)]\\[4pt] &= \dfrac{1}{2}[\sin(6\theta)+\sin(2\theta)] \end{align*}\]
Exercise \(\PageIndex{2}\)
Use the product-to-sum formula to write the product as a sum: \(\sin(x+y)\cos(x−y)\).
Answer
\(\dfrac{1}{2}(\sin 2x+\sin 2y)\)
Expressing Products of Sines in Terms of Cosine
Expressing the product of sines in terms of
cosine is also derived from the sum and difference identities for cosine. In this case, we will first subtract the two cosine formulas:
\[\begin{align*} \cos(\alpha-\beta)&= \cos \alpha \cos \beta+\sin \alpha \sin \beta\\[4pt] \underline{-\cos(\alpha+\beta)}&= -(\cos \alpha \cos \beta-\sin \alpha \sin \beta)\\[4pt] \cos(\alpha-\beta)-\cos(\alpha+\beta)&= 2 \sin \alpha \sin \beta\\[4pt] \text{Then, we divide by 2 to isolate the product of sines:}\\[4pt] \sin \alpha \sin \beta&= \dfrac{1}{2}[ \cos(\alpha-\beta)-\cos(\alpha+\beta) ] \end{align*}\]
Similarly we could express the product of cosines in terms of sine or derive other product-to-sum formulas.
THE PRODUCT-TO-SUM FORMULAS
The
product-to-sum formulas are as follows:
\[\cos \alpha \cos \beta=\dfrac{1}{2}[\cos(\alpha−\beta)+\cos(\alpha+\beta)]\]
\[\sin \alpha \cos \beta=\dfrac{1}{2}[\sin(\alpha+\beta)+\sin(\alpha−\beta)]\]
\[\sin \alpha \sin \beta=\dfrac{1}{2}[\cos(\alpha−\beta)−\cos(\alpha+\beta)]\]
\[\cos \alpha \sin \beta=\dfrac{1}{2}[\sin(\alpha+\beta)−\sin(\alpha−\beta)]\]
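These formulas are easy to spot-check numerically. The short Python sketch below (the sample angles are arbitrary choices, and Python is used simply as a convenient calculator) evaluates both sides of each of the four formulas:

import math

def close(a, b):
    return math.isclose(a, b, abs_tol=1e-12)

for alpha in (0.3, 1.1, 2.7):
    for beta in (-0.4, 0.5, 1.9):
        s, d = alpha + beta, alpha - beta
        assert close(math.cos(alpha) * math.cos(beta), 0.5 * (math.cos(d) + math.cos(s)))
        assert close(math.sin(alpha) * math.cos(beta), 0.5 * (math.sin(s) + math.sin(d)))
        assert close(math.sin(alpha) * math.sin(beta), 0.5 * (math.cos(d) - math.cos(s)))
        assert close(math.cos(alpha) * math.sin(beta), 0.5 * (math.sin(s) - math.sin(d)))
print("all four product-to-sum formulas check out")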
Exercise \(\PageIndex{3}\)
Use the product-to-sum formula to evaluate \(\cos \dfrac{11\pi}{12} \cos \dfrac{\pi}{12}\).
Answer
\(\dfrac{−2−\sqrt{3}}{4}\)
Expressing Sums as Products
Some problems require the reverse of the process we just used. The
sum-to-product formulas allow us to express sums of sine or cosine as products. These formulas can be derived from the product-to-sum identities. For example, with a few substitutions, we can derive the sum-to-product identity for sine. Let \(\dfrac{u+v}{2}=\alpha\) and \(\dfrac{u−v}{2}=\beta\).
Then,
\[\begin{align*} \alpha+\beta&= \dfrac{u+v}{2}+\dfrac{u-v}{2}\\[4pt] &= \dfrac{2u}{2}\\[4pt] &= u \end{align*}\]
\[\begin{align*} \alpha-\beta&= \dfrac{u+v}{2}-\dfrac{u-v}{2}\\[4pt] &= \dfrac{2v}{2}\\[4pt] &= v \end{align*}\]
Thus, replacing \(\alpha\) and \(\beta\) in the product-to-sum formula with the substitute expressions, we have
\[\begin{align*} \sin \alpha \cos \beta&= \dfrac{1}{2}[\sin(\alpha+\beta)+\sin(\alpha-\beta)]\\[4pt] \sin \left ( \frac{u+v}{2} \right ) \cos \left ( \frac{u-v}{2} \right )&= \frac{1}{2}[\sin u + \sin v]\qquad \text{Substitute for } (\alpha+\beta) \text{ and } (\alpha-\beta)\\[4pt] 2\sin\left(\dfrac{u+v}{2}\right) \cos\left(\dfrac{u-v}{2}\right)&= \sin u+\sin v \end{align*}\]
The other sum-to-product identities are derived similarly.
Exercise \(\PageIndex{4}\)
Use the sum-to-product formula to write the sum as a product: \(\sin(3\theta)+\sin(\theta)\).
Answer
\(2\sin(2\theta)\cos(\theta)\)
Example \(\PageIndex{5}\): Evaluating Using the Sum-to-Product Formula
Evaluate \(\cos(15°)−\cos(75°)\). Check the answer with a graphing calculator.
Solution
We begin by writing the formula for the difference of cosines.
\[\begin{align*}
\cos \alpha-\cos \beta&= -2 \sin\left(\dfrac{\alpha+\beta}{2}\right) \sin\left(\dfrac{\alpha-\beta}{2}\right)\\[4pt] \text {Then we substitute the given angles and simplify.}\\[4pt] \cos(15^{\circ})-\cos(75^{\circ})&= -2\sin\left(\dfrac{15^{\circ}+75^{\circ}}{2}\right) \sin\left(\dfrac{15^{\circ}-75^{\circ}}{2}\right)\\[4pt] &= -2\sin(45^{\circ}) \sin(-30^{\circ})\\[4pt] &= -2\left(\dfrac{\sqrt{2}}{2}\right)\left(-\dfrac{1}{2}\right)\\[4pt] &= \dfrac{\sqrt{2}}{2} \end{align*}\]
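If a graphing calculator is not at hand, the same check can be done in a few lines of Python (used here purely as a convenient calculator):

import math

lhs = math.cos(math.radians(15)) - math.cos(math.radians(75))
rhs = math.sqrt(2) / 2
print(lhs, rhs)                  # both are approximately 0.70710678
print(math.isclose(lhs, rhs))    # True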
Example \(\PageIndex{6}\): Proving an Identity
Prove the identity:
\[\dfrac{\cos(4t)−\cos(2t)}{\sin(4t)+\sin(2t)}=−\tan t\]
Solution
We will start with the left side, the more complicated side of the equation, and rewrite the expression until it matches the right side.
\[\begin{align*} \dfrac{\cos(4t)-\cos(2t)}{\sin(4t)+\sin(2t)}&= \dfrac{-2 \sin\left(\dfrac{4t+2t}{2}\right) \sin\left(\dfrac{4t-2t}{2}\right)}{2 \sin\left(\dfrac{4t+2t}{2}\right) \cos\left(\dfrac{4t-2t}{2}\right)}\\[4pt] &= \dfrac{-2 \sin(3t)\sin t}{2 \sin(3t)\cos t}\\[4pt] &= -\dfrac{\sin t}{\cos t}\\[4pt] &= -\tan t \end{align*}\]
Analysis
Recall that verifying trigonometric identities has its own set of rules. The procedures for solving an equation are not the same as the procedures for verifying an identity. When we prove an identity, we pick one side to work on and make substitutions until that side is transformed into the other side.
Example \(\PageIndex{7}\): Verifying the Identity Using Double-Angle Formulas and Reciprocal Identities
Verify the identity \({\csc}^2 \theta−2=\dfrac{\cos(2\theta)}{{\sin}^2 \theta}\).
Solution
For verifying this equation, we are bringing together several of the identities. We will use the double-angle formula and the reciprocal identities. We will work with the right side of the equation and rewrite it until it matches the left side.
\[\begin{align*} \dfrac{\cos(2\theta)}{{\sin}^2 \theta}&= \dfrac{1-2 {\sin}^2 \theta}{{\sin}^2 \theta}\\[4pt] &= \dfrac{1}{{\sin}^2 \theta}-\dfrac{2 {\sin}^2 \theta}{{\sin}^2 \theta}\\[4pt] &= {\csc}^2 \theta - 2 \end{align*}\]
Exercise \(\PageIndex{5}\)
Verify the identity \(\tan \theta \cot \theta−{\cos}^2 \theta={\sin}^2 \theta\).
Answer
\[\begin{align*} \tan \theta \cot \theta-{\cos}^2 \theta&= \left(\dfrac{\sin \theta}{\cos \theta}\right)\left(\dfrac{\cos \theta}{\sin \theta}\right)-{\cos}^2 \theta\\[4pt] &= 1-{\cos}^2 \theta\\[4pt] &= {\sin}^2 \theta \end{align*}\]
Key Equations
Product-to-sum Formulas
\[\cos \alpha \cos \beta=\dfrac{1}{2}[\cos(\alpha−\beta)+\cos(\alpha+\beta)] \nonumber \]
\[\sin \alpha \cos \beta=\dfrac{1}{2}[\sin(\alpha+\beta)+\sin(\alpha−\beta)] \nonumber \]
\[\sin \alpha \sin \beta=\dfrac{1}{2}[\cos(\alpha−\beta)−\cos(\alpha+\beta)] \nonumber \]
\[\cos \alpha \sin \beta=\dfrac{1}{2}[\sin(\alpha+\beta)−\sin(\alpha−\beta)] \nonumber \]
Sum-to-product Formulas
\[\sin \alpha+\sin \beta=2\sin(\dfrac{\alpha+\beta}{2})\cos(\dfrac{\alpha−\beta}{2}) \nonumber \]
\[\sin \alpha-\sin \beta=2\sin(\dfrac{\alpha-\beta}{2})\cos(\dfrac{\alpha+\beta}{2}) \nonumber \]
\[\cos \alpha−\cos \beta=−2\sin(\dfrac{\alpha+\beta}{2})\sin(\dfrac{\alpha−\beta}{2}) \nonumber \]
\[\cos \alpha+\cos \beta=2\cos(\dfrac{\alpha+\beta}{2})\cos(\dfrac{\alpha−\beta}{2}) \nonumber \]
Key Concepts
From the sum and difference identities, we can derive the product-to-sum formulas and the sum-to-product formulas for sine and cosine. We can use the product-to-sum formulas to rewrite products of sines, products of cosines, and products of sine and cosine as sums or differences of sines and cosines. See Example \(\PageIndex{1}\), Example \(\PageIndex{2}\), and Example \(\PageIndex{3}\). We can also derive the sum-to-product identities from the product-to-sum identities using substitution. We can use the sum-to-product formulas to rewrite a sum or difference of sines or cosines as a product of sines and cosines. See Example \(\PageIndex{4}\). Trigonometric expressions are often simpler to evaluate using the formulas. See Example \(\PageIndex{5}\). The identities can be verified using other formulas or by converting the expressions to sines and cosines. To verify an identity, we choose the more complicated side of the equals sign and rewrite it until it is transformed into the other side. See Example \(\PageIndex{6}\) and Example \(\PageIndex{7}\).
Let $X$ be any set and let $\mathcal{E} \subseteq 2^X$. Then there exists a unique smallest $\sigma$-algebra containing $\mathcal{E}$.
Attempt:
Put $$ \mathcal{C} = \{ \mathcal{F} : \mathcal{F} \; \text{is a sigma algebra} \; \; and \; \; \mathcal{E} \subset \mathcal{F} \} $$
$\mathcal{C} $ is non-empty since the sigma algebra $2^X$ lives in there trivially. Next, write
$$ \mathcal{S} = \bigcap_{C \in \mathcal{C} } C $$
$\mathcal{S}$ is a sigma algebra since intersection of sigma algebras is a sigma algebra. Can someone help me to show why this set is the smallest unique sigma algebra? I am stuck. Thanks. |
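If it helps to see the construction in action, here is a small Python sketch on a three-element set (the generating family $\mathcal{E}=\{\{0\}\}$ is an arbitrary choice of mine): it enumerates every $\sigma$-algebra on $X$ containing $\mathcal{E}$ and intersects them, exactly as in your definition of $\mathcal{S}$.

from itertools import chain, combinations

X = {0, 1, 2}
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(X), r) for r in range(len(X) + 1))]

def is_sigma_algebra(F):
    # on a finite set: contains X and is closed under complement and union
    if frozenset(X) not in F:
        return False
    for A in F:
        if frozenset(X - A) not in F:
            return False
        for B in F:
            if A | B not in F:
                return False
    return True

E = [frozenset({0})]   # the generating family (a hypothetical example)

# all sigma-algebras on X that contain E ...
C = [F
     for r in range(len(subsets) + 1)
     for F in (set(f) for f in combinations(subsets, r))
     if is_sigma_algebra(F) and all(A in F for A in E)]

# ... and their intersection, which is the generated sigma-algebra
S = set.intersection(*C)
print(sorted(tuple(sorted(A)) for A in S))
# the generated sigma-algebra consists of the empty set, {0}, {1, 2}, and X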
Prove that if $\liminf \left|\frac{a_{n+1}}{a_n}\right|>1$, then the series $\sum a_n$ diverges.
What I did was: let $c\geq 1$
If $\liminf \left|\frac{a_{n+1}}{a_n}\right|>c$, then $\exists n_0 \in \mathbb{N}$ such that $|a_{n+1}|>c\,|a_{n}|\quad \forall n>n_0$.
then $|a_{n+2}|>c \cdot |a_{n+1}|>c^2\cdot |a_{n}|$ and so $|a_{n+p}|>c^p\cdot |a_n|$.
That makes $$\sum_{n=0}^{\infty} |a_n|>\sum_{n=0}^{n_0} |a_n|+|a_{n_0}|\sum_{n=n_0}^{\infty}c^n$$ but $\sum c^n$ diverges since $c\geq1$, therefore $\sum |a_n|$ diverges.
The problem is, the theorem says that $\sum a_n$ diverges, and this proof only shows that it doesn't converge absolutely. It still could be conditionally convergent. What can I do?
Is the methodology to change the basis of a matrix the same as changing the basis of a vector? For example, if I had $A : \mathbb{R}^2 \to \mathbb{R}^2$ $$A=\begin{pmatrix} 3 & -5 \\ 2 & 7 \end{pmatrix}$$ in the standard basis and wanted it in the basis $v_1 = (1,3), v_2=(2,5)$. To do this, I simply multiply $A * \begin{pmatrix} 1 & 2 \\ 3 & 5 \end{pmatrix}$ to get $\begin{pmatrix} -12 & -19 \\ 23 & 39 \end{pmatrix}$? Is this correct?
Almost there. If you have a matrix $A$ with respect to the standard basis and $D$ is the matrix of the transformation with respect to the basis, say $B=\lbrace v_1,v_2\rbrace \subset \mathbb{R}^2$ (notice that $v_1$ and $v_2$ are linearly independent), then, after finding $C = \begin{pmatrix}1&2\\3&5\end{pmatrix}$, the change of basis matrix for the basis $B$, you want to find $D$ in terms of $C$ and $A$ as follows:
$$D\ [\vec{x}]_B = [T\ \vec{x} ]_B = [A\ \vec{x}]_B = C^{-1} A\ [\vec{x}] = C^{-1} A\ C\ [\vec{x}]_B$$
where $[\vec{x}] = C\ [\vec{x}]_B$ and $T \ \vec{x} = A \ \vec{x}$, the latter in standard coordinates.
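A quick numerical check of this formula with the matrices from the question, using Python/NumPy purely as a convenience:

import numpy as np

A = np.array([[3, -5],
              [2,  7]])
C = np.array([[1, 2],
              [3, 5]])          # columns are the new basis vectors v1, v2

D = np.linalg.inv(C) @ A @ C    # matrix of the same map in the basis B
print(D)                        # approximately [[106, 173], [-59, -96]]

Note that this differs from the product $AC$ alone, which is why the $C^{-1}$ factor matters.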
Check this video for more details.
I have found what I think is another bug in tikz-cd v0.9b. Take the following code:
\documentclass[11pt]{article}
\usepackage{amsmath} %maths
\usepackage{tikz-cd}
\usetikzlibrary{arrows}
\title{}
\date{}

\tikzset{
  commutative diagrams/.cd,
  arrow style = tikz,
  diagrams={>=latex}
}

\begin{document}
\begin{tikzpicture}[commutative diagrams/every diagram, column sep = 3em]
  \matrix (m) [matrix of math nodes, nodes in empty cells]{
    |(Names)|N' & |(N)|N \\
    |(T)|T & |(TTilde)|\widetilde{T} \\
  };
  \path [commutative diagrams/.cd, every arrow, every label]
    (Names) edge [commutative diagrams/hook] (N)
            edge node [left] {$\sigma$} (T)
    (N)     edge [dotted] node {$\tilde{\sigma} \text{ for unique } \tilde{\sigma}$} (TTilde)
    (T)     edge [commutative diagrams/hook] node [below] {$\eta_{T}$} (TTilde)
  ;
\end{tikzpicture}
\end{document}
With v 0.3c we get the following output:
However, with v 0.9b, the bottom monic arrow from T to Ttilde slants upwards slightly:
I guess the target anchor is being calculated differently, something to do with the extra height of the target object? |
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} =5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\mathrm{T}}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Yes, it's possible to have an infinite chain.
I'm sure you're already familiar with some examples:$$ O(x) \subseteq O(x^2) \subseteq \ldots \subseteq O(x^{42}) \subseteq \ldots$$You have an infinite chain here: polynomials of growing degree. Can you go further? Sure! An exponential grows faster (asymptotically speaking) than any polynomial.$$ O(x) \subseteq O(x^2) \subseteq \ldots \subseteq O(x^{42}) \subseteq \ldots O(e^x)$$And of course you can keep going: $O(\mathrm{e}^x) \subseteq O(x\,\mathrm{e}^x) \subseteq O(\mathrm{e}^{2x}) \subseteq O(\mathrm{e}^{\mathrm{e}^x}) \subseteq \ldots$
You can build an infinite chain in the other direction too. If $f = O(g)$ then $\dfrac{1}{g} = O\left(\dfrac{1}{f}\right)$ (sticking to positive functions, since around here we discuss asymptotics of complexity functions). So we have for example:
$$ O(x) \subseteq O(x^2) \subseteq \ldots \subseteq O\left(\dfrac{e^x}{x^2}\right) \subseteq O\left(\dfrac{e^x}{x}\right) \subseteq O(e^x)$$
In fact, given any chain of functions $f_1, f_2, \ldots$, you can build a function $f_\infty$ that grows faster than all of them. (I assume the $f_i$'s are functions from $\mathbb{N}$ to $\mathbb{R}_+$.) First, start with the idea $f_\infty(x) = \max \{f_n(x) \mid n \in\mathbb{N}\}$. That may not work because the set $\{f_n(x) \mid n \in\mathbb{N}\}$ can be unbounded. But since we're only interested in asymptotic growth, it's enough to start small and grow progressively. Take the maximum over a
finite number of functions.$$f_\infty(x) = \max \{f_n(x) \mid 1 \le n \le N \} \qquad \text{if \(N \le x \lt N+1\)}$$Then for any $N$, $f_N \in O(f_\infty)$, since $\forall x \ge N, f_\infty(x) \ge f_N(x)$. If you want a function that grows strictly faster ($f_\infty = o(f_\infty')$), take $f_\infty'(x) = x \cdot (1 + f_\infty(x))$. |
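Here is a minimal Python sketch of this diagonal construction (the chain $f_n(x) = x^n$ is just an arbitrary example of mine):

import math

# a hypothetical chain f_1, f_2, ...; here f_n(x) = x**n, just as an example
def f(n, x):
    return float(x) ** n

def f_inf(x):
    # take the max over the first N functions, where N <= x < N + 1
    N = max(1, math.floor(x))
    return max(f(n, x) for n in range(1, N + 1))

# for every fixed n, f_n(x) <= f_inf(x) once x >= n, so f_n = O(f_inf)
for n in range(1, 6):
    assert all(f(n, x) <= f_inf(x) for x in range(n, 40))
print("each f_n is dominated by f_inf from x = n onwards")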
This accessible text covers key results in functional analysis that are essential for further study in the calculus of variations, analysis, dynamical systems, and the theory of partial differential equations. The treatment of Hilbert spaces covers the topics required to prove the Hilbert–Schmidt theorem, including orthonormal bases, the Riesz representation theorem, and the basics of spectral theory. The material on Banach spaces and their duals includes the Hahn–Banach theorem, the Krein–Milman theorem, and results based on the Baire category theorem, before culminating in a proof of sequential weak compactness in reflexive spaces. Arguments are presented in detail, and more than 200 fully-worked exercises are included to provide practice applying techniques and ideas beyond the major theorems. Familiarity with the basic theory of vector spaces and point-set topology is assumed, but knowledge of measure theory is not required, making this book ideal for upper undergraduate-level and beginning graduate-level courses.
This contribution covers the topic of my talk at the 2016-17 Warwick-EPSRC Symposium: 'PDEs and their applications'. As such it contains some already classical material and some new observations. The main purpose is to compare several avatars of the Kato criterion for the convergence of a Navier-Stokes solution, to a regular solution of the Euler equations, with numerical or physical issues like the presence (or absence) of anomalous energy dissipation, the Kolmogorov 1/3 law or the Onsager C^{0,1/3} conjecture. Comparison with results obtained after September 2016 and an extended list of references have also been added.
Regularity criteria for solutions of the three-dimensional Navier-Stokes equations are derived in this paper. Let $$\Omega(t, q) := \left\{x:|u(x,t)| > C(t,q)\|u\|_{L^{3q-6}(\mathbb{R}^3)}\right\} \cap\left\{x:\widehat{u}\cdot\nabla|u|\neq0\right\}, \quad \tilde\Omega(t,q) := \left\{x:|u(x,t)| \le C(t,q)\|u\|_{L^{3q-6}(\mathbb{R}^3)}\right\} \cap\left\{x:\widehat{u}\cdot\nabla|u|\neq0\right\},$$ where $$q\ge3$$ and $$C(t,q) := \left(\frac{\|u\|_{L^4(\mathbb{R}^3)}^2\,\||u|^{(q-2)/2}\,\nabla|u|\|_{L^2(\mathbb{R}^3)}}{cq\,\|u_0\|_{L^2(\mathbb{R}^3)}\, \|p+\mathcal{P}\|_{L^2(\tilde\Omega)}\,\||u|^{(q-2)/2}\, \widehat{u}\cdot\nabla|u|\|_{L^2(\tilde\Omega)}}\right)^{2/(q-2)}.$$ Here $$u_0=u(x,0)$$, $$\mathcal{P}(x,|u|,t)$$ is a pressure moderator of relatively broad form, $$\widehat{u}\cdot\nabla|u|$$ is the gradient of $$|u|$$ along streamlines, and $$c=(2/\pi)^{2/3}/\sqrt{3}$$ is the constant in the inequality $$\|f\|_{L^6(\mathbb{R}^3)}\le c\|\nabla f\|_{L^2(\mathbb{R}^3)}$$.
We address the decay and the quantitative uniqueness properties for solutions of the elliptic equation with a gradient term,$$\Delta u=W\cdot \nabla u$$. We prove that there exists a solution in a complement of the unit ball which satisfies$$|u(x)|\le C\exp (-C^{-1}|x|^2)$$where$$W$$is a certain function bounded by a constant. Next, we revisit the quantitative uniqueness for the equation$$-\Delta u= W \cdot \nabla u$$and provide an example of a solution vanishing at a point with the rate$${\rm const}\Vert W\Vert_{L^\infty}^2$$. We also review decay and vanishing results for the equation$$\Delta u= V u$$.
We give a survey of recent results on weak-strong uniqueness for compressible and incompressible Euler and Navier-Stokes equations, and also make some new observations. The importance of the weak-strong uniqueness principle stems, on the one hand, from the instances of nonuniqueness for the Euler equations exhibited in the past years; and on the other hand from the question of convergence of singular limits, for which weak-strong uniqueness represents an elegant tool.
We investigate existence, uniqueness and regularity of time-periodic solutions to the Navier-Stokes equations governing the flow of a viscous liquid past a three-dimensional body moving with a time-periodic translational velocity. The net motion of the body over a full time-period is assumed to be non-zero. In this case, the appropriate linearization is the time-periodic Oseen system in a three-dimensional exterior domain. A priori L^q estimates are established for this linearization. Based on these "maximal regularity" estimates, existence and uniqueness of smooth solutions to the fully nonlinear Navier-Stokes problem is obtained by the contraction mapping principle.
In this contribution we focus on a few results regarding the study of the three-dimensional Navier-Stokes equations with the use of vector potentials. These dependent variables are critical in the sense that they are scale invariant. By surveying recent results utilising criticality of various norms, we emphasise the advantages of working with scale-invariant variables. The Navier-Stokes equations, which are invariant under static scaling transforms, are not invariant under dynamic scaling transforms. Using the vector potential, we introduce scale invariance in a weaker form, that is, invariance under dynamic scaling modulo a martingale (Maruyama-Girsanov density) when the equations are cast into Wiener path-integrals. We discuss the implications of this quasi-invariance for the basic issues of the Navier-Stokes equations.
This article offers a modern perspective that exposes the many contributions of Leray in his celebrated work on the three-dimensional incompressible Navier-Stokes equations from 1934. Although the importance of his work is widely acknowledged, the precise contents of his paper are perhaps less well known. The purpose of this article is to fill this gap. We follow Leray's results in detail: we prove local existence of strong solutions starting from divergence-free initial data that is either smooth or belongs to$$H^1$$or$$L^2 \cap L^p$$(with$$p \in (3,\infty]$$), as well as lower bounds on the norms$$\| \nabla u (t) \|_2$$and$$\| u(t) \|_p$$($$p\in(3,\infty]$$) as t approaches a putative blow-up time. We show global existence of a weak solution and weak-strong uniqueness. We present Leray's characterisation of the set of singular times for the weak solution, from which we deduce that its upper box-counting dimension is at most 1/2. Throughout the text we provide additional details and clarifications for the modern reader and we expand on all ideas left implicit in the original work, some of which we have not found in the literature. We use some modern mathematical tools to bypass some technical details in Leray's work, and thus expose the elegance of his approach.
By their use of mild solutions, Fujita-Kato and later on Giga-Miyakawa opened the way to solving the initial-boundary value problem for the Navier-Stokes equations with the help of the contracting mapping principle in suitable Banach spaces, on any smoothly bounded domain $$\Omega \subset \mathbb{R}^n, n \ge 2$$, globally in time in case of sufficiently small data. We will consider a variant of these classical approximation schemes: by iterative solution of linear singular Volterra integral equations, on any compact time interval J, again we find the existence of a unique mild Navier-Stokes solution under smallness conditions, but moreover we get the stability of each (possibly large) mild solution, inside a scale of Banach spaces which are imbedded in some $$C^0 (J, L^r (\Omega))$$, $$1 < r < \infty$$.
This paper reviews and summarizes two recent pieces of work on the Rayleigh-Taylor instability. The first concerns the 3D Cahn-Hilliard-Navier-Stokes (CHNS) equations and the BKM-type theorem proved by Gibbon, Pal, Gupta, & Pandit (2016). The second and more substantial topic concerns the variable density model, which is a buoyancy-driven turbulent flow considered by Cook & Dimotakis (2001) and Livescu & Ristorcelli (2007, 2008). In this model $\rho^* (x, t)$ is the composition density of a mixture of two incompressible miscible fluids with fluid densities$$\rho^*_2 > \rho^*_1$$and$$\rho^*_0$$is a reference normalisation density. Following the work of a previous paper (Rao, Caulfield, & Gibbon, 2017), which used the variable$$\theta = \ln \rho^*/\rho^*_0$$, data from the publicly available Johns Hopkins Turbulence Database suggests that the L2-spatial average of the density gradient$$\nabla \theta$$can reach extremely large values at intermediate times, even in flows with low Atwood number At =$$(\rho^*_2 - \rho^*_1)/(\rho^*_2 + \rho^*_1) = 0.05$$. This implies that very strong mixing of the density field at small scales can potentially arise in buoyancy-driven turbulence thus raising the possibility that the density gradient$$\nabla \theta$$might blow up in a finite time.
The aim of this paper is to prove energy conservation for the incompressible Euler equations in a domain with boundary. We work in the domain $$\mathbb{T}^2\times\mathbb{R}_+$$, where the boundary is both flat and has finite measure; in this geometry we do not require any estimates on the pressure, unlike the proof in general bounded domains due to Bardos & Titi (2018). However, first we study the equations on domains without boundary (the whole space $$\mathbb{R}^3$$, the torus $$\mathbb{T}^3$$, and the hybrid space $$\mathbb{T}^2\times\mathbb{R}$$). We make use of some arguments due to Duchon & Robert (2000) to prove energy conservation under the assumption that $$u\in L^3(0,T;L^3(\mathbb{R}^3))$$ and $$\lim_{|y|\to 0}\frac{1}{|y|}\int^T_0\int_{\mathbb{R}^3} |u(x+y)-u(x)|^3\,\mathrm{d} x\,\mathrm{d} t=0$$ or $$\int_0^T\int_{\mathbb{R}^3}\int_{\mathbb{R}^3}\frac{|u(x)-u(y)|^3}{|x-y|^{4+\delta}}\,\mathrm{d} x\,\mathrm{d} y\,\mathrm{d} t<\infty,\qquad\delta>0$$, the second of which is equivalent to $$u\in L^3(0,T;W^{\alpha,3}(\mathbb{R}^3))$$, $$\alpha>1/3$$.
The Euler and Navier–Stokes equations are the fundamental mathematical models of fluid mechanics, and their study remains central in the modern theory of partial differential equations. This volume of articles, derived from the workshop 'PDEs in Fluid Mechanics' held at the University of Warwick in 2016, serves to consolidate, survey and further advance research in this area. It contains reviews of recent progress and classical results, as well as cutting-edge research articles. Topics include Onsager's conjecture for energy conservation in the Euler equations, weak-strong uniqueness in fluid models and several chapters address the Navier–Stokes equations directly; in particular, a retelling of Leray's formative 1934 paper in modern mathematical language. The book also covers more general PDE methods with applications in fluid mechanics and beyond. This collection will serve as a helpful overview of current research for graduate students new to the area and for more established researchers.
In a previous paper, we presented results from a 12-week study of a Psychomotor DANCe Therapy INtervention (DANCIN) based on Danzón Latin Ballroom that involves motor, emotional-affective, and cognitive domains, using a multiple-baseline single-case design in three care homes. This paper reports the results of a complementary process evaluation to elicit the attitudes and beliefs of home care staff, participating residents, and family members with the aim of refining the content of DANCIN in dementia care.
Methods:
An external researcher collected bespoke questionnaires from ten participating residents, 32 care home staff, and three participants’ family members who provided impromptu feedback in one of the care homes. The Behavior Change Technique Taxonomy v1 (BCTTv1) provided a methodological tool for identifying active components of the DANCIN approach warranting further exploration, development, and implementation.
Results:
Ten residents found DANCIN beneficial in terms of mood and socialization in the care home. Overall, 78% of the staff thought DANCIN led to improvements in residents’ mood; 75% agreed that there were improvements in behavior; 56% reported increased job satisfaction; 78% of staff were enthusiastic about receiving further training. Based on participants’ responses, four BCTTv1 labels–Social support (emotional), Focus on past success and verbal persuasion to boost self-efficacy, Restructuring the social environment and Habit formation–were identified to describe the intervention. Residents and staff recommended including additional musical genres and extending the session length. Discussions of implementing a supervision system to sustain DANCIN regularly regardless of management or staff turnover were suggested.
Conclusions:
Care home residents with mild to moderate dementia wanted to continue DANCIN as part of their routine care and staff and family members were largely supportive of this approach. This study argues in favor of further dissemination of DANCIN in care homes. We provide recommendations for the future development of DANCIN based on the views of key stakeholder groups. |
theoretically it's possible to attach the inductor to a voltage source
Yes, in the context of ideal circuit theory, it is possible to do so without contradiction.
Let the voltage source have a constant voltage $V_S \gt 0$ across its terminals and the inductor have inductance $L$. The inductor is connected to the voltage source at time $t = 0$. By KVL, the voltage across the inductor is given by
$$v_L(t) = V_S\, u(t)$$
where $u(t)$ is the unit step function.
The circuit current is described by a very simple differential equation:
$$V_S\,u(t) = L \frac{di}{dt}$$
with solution
$$i(t) = \frac{V_S}{L}\, t\, u(t)$$
In other words, the current is zero for $t \le 0$ and increases at a constant rate for $t \ge 0$.
Note that the current is unbounded (does not reach a limiting value) as $t \rightarrow \infty$ which is clearly unphysical. For a physical circuit, the internal resistance of the voltage source and/or the inductor will limit the current. |
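As a rough numerical illustration, here is a small Python sketch with made-up component values (V_S = 5 V, L = 10 mH, and a parasitic series resistance R = 0.5 Ω, none of which come from the question) contrasting the ideal ramp with the resistance-limited response:

```python
import numpy as np

# Made-up component values for illustration only
V_S = 5.0      # source voltage, volts
L = 10e-3      # inductance, henries
R = 0.5        # parasitic series resistance, ohms

t = np.linspace(0.0, 0.2, 1000)                   # seconds
i_ideal = (V_S / L) * t                           # ideal inductor: current ramps without bound
i_real = (V_S / R) * (1.0 - np.exp(-R * t / L))   # RL loop: current saturates at V_S / R

print(f"current after {t[-1]:.2f} s: ideal = {i_ideal[-1]:.0f} A, with R = {i_real[-1]:.1f} A")
```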
Okay so we all know that one planet from the movie "Interstellar" that orbited the black hole Gargantua. How do I NOT get that? What's the minimum safe distance for a planet to be from a black hole ("safe" as in no funky relativistic time dilation and no megatsunamis)?
A black hole does not have any magic properties; it does not "radiate" time dilation or any other nonsense like that. Noticeable time dilation happens when one observer moves at relativistic speeds with reference to another, or when one observer is under much higher gravitational acceleration than another.
In the movie "Interstellar", the crew of a spaceship landed on a planet that orbited a supermassive black hole. Being on the surface of that planet is said to cause a time dilation of 7 years on Earth for each hour on the planet surface, in other words a Lorentz factor of > 61,000.
The scientific problem with that is that the numbers don't nearly add up. The planet itself has 1.3g surface gravity. That causes a negligible time dilation compared to Earth.
That only leaves relativistic speed compared to Earth as the source of time dilation. For a Lorentz factor of 61,000 and ignoring the effect of the planet's surface gravity, you would need a relative speed of 99.999999987% of c, the speed of light. That is how fast the planet would have to orbit around its black hole in order to cause the time dilation presented in the story, and is obviously impossible. Even if you had a planet rotating that quickly, in order to land on it, you would have to match its orbit, i.e. you would at least have to accelerate your spaceship to the same speed.
The movie got a lot of things right, and even caused some scientific papers to be written, but that bit about time dilation, while necessary for the story, was the worst kind of nonsense: Scientifically sound, but the numbers were way, way off.
Orbiting a black hole and you don't want extreme tidal forces? This depends on several things, including:
The mass of the black hole; the Interstellar one looks like it's in the largest category, a supermassive black hole, because you can actually see that the black hole is much larger than a planet in front of it. This puts it anywhere from $10^{5}$ to $10^{10}$ solar masses. (If it were about the same size as the planet, it would still be about 1000 ($10^3$) times as massive as our Sun!) The mass of the planet in question is unknown, but I'm assuming an Earth-like mass ($\approx 5.97 \times 10^{24}\ \mathrm{kg}$).
Now, let's talk about tidal forces. Wikipedia goes through some derivations of the tidal force, but eventually ends up with
$$\vec{a}_g \approx \hat{r}\, 2\, \Delta r\, G\,\frac{M_{bh}}{R_{bh}^3} $$
with:
$\vec{a}_g$ being the acceleration of the tidal force,
$G$ being the universal gravitational constant,
$\Delta r$ being the distance from the planet's center to the surface,
$M_{bh}$ being the mass of the black hole,
$R_{bh}$ being the orbital radius of the planet around the black hole,
$\hat{r}$ just letting us know the force is in the direction of the black hole.
but wait, this gets easier! We only want the orbital radius, and Earth-like tidal forces on an Earth-sized planet. That means we want to solve the following for $R_{bh}$
$$\frac{M_{bh}}{R_{bh}^3} = \frac{M_{sun}}{R_{earth}^3}$$
Just looking at this, I can tell you that $R_{bh}^3$ needs to scale with the mass of your black hole, producing an extra factor of $10^5$ over $R_{earth}^3$ (here $R_{earth}$ is Earth's orbital radius of 1 AU). $\sqrt[3]{10^5} \approx 46$, so the planet is about 46 AU out from the middle of the black hole.
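As a quick numerical check of that scaling (again treating $R_{earth}$ as Earth's 1 AU orbital radius), a short Python sketch:

```python
# Distance at which an Earth-like planet feels only Sun-at-1-AU tidal forces
# from a black hole of mass M_bh: solve M_bh / R^3 = M_sun / (1 AU)^3 for R.
for m_bh_solar in (1e5, 1e6, 1e8, 1e10):          # black-hole mass in solar masses
    R_au = m_bh_solar ** (1.0 / 3.0)              # orbital radius in AU
    print(f"M_bh = {m_bh_solar:.0e} M_sun  ->  R = {R_au:,.0f} AU")
```

For the lightest supermassive case ($10^5$ solar masses) this reproduces the ~46 AU quoted above; for $10^{10}$ solar masses it is already about 2,150 AU.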
Orbiting a black hole is no more or less dangerous than orbiting a small sun: You get too close, bad things happen. If your orbit decays without correction, you are going to be obliterated.
As long as you properly insert yourself into an orbit at a safe distance at the appropriate velocity, there is no particular danger from a black hole.
And leaving is no special challenge either. By being in orbit you are already at escape velocity, so you don't need any particularly special 'oomph' to get back home. [EDIT: Sorry, I was wrong here. Your speed is about 0.7 of what you would need for escape, but it's in the ball-park!]
Now a
planet close to a black hole might be a more complex story.
But don't believe the hype of 80's SciFi - You won't inevitably have to hold the rails of your starship as your head gets stretched. Unless of course, you get too close...
There was an episode of Battlestar Galactica where the Cylons were orbiting a black hole and my brief reaction was "How ridiculous!", but then I thought about it, and concluded "Why not!?!".
I will trust @PipperChip on the math!
Yes, orbiting a two-solar-mass black hole should be no different to orbiting a two-solar-mass star. Stay at a sensible planetary distance and you will be OK ...
... unless or until something falls into the back hole. The process of its being chewed up and eaten will convert a good fraction of its mass into energy, much of it gamma rays. Most of that energy will be tightly beamed more or less perpendicular to the plane of the accretion disk into which that object will be transformed. So an orbit at right angles to the disk would be especially unhealthy.
If the black hole has "hoovered" its stellar neighbourhood many aeons ago and there's no active accretion disk when you arrive, you are probably safe.
Edit ... should have said, something large falls into it. The Sun converts about 4 million tons of mass into energy per second(!), most of which comes out as heat and light rather than gamma rays. But at a planetary distance I don't think a few tons of rock being converted into gamma rays will be lethal. |
Almost-periodic analytic function
An analytic function $f(s)$, $s=\sigma+i\tau$, regular in a strip $-\infty\leqslant\alpha<\sigma<\beta\leqslant+\infty$, and expandable into a series \begin{equation} \sum a_ne^{i\lambda_ns}, \end{equation}
where the $a_n$ are complex and the $\lambda_n$ are real numbers. A real number $\tau$ is called an $\varepsilon$-almost-period of $f(s)$ if for all points of the strip $(\alpha, \beta)$ the inequality
\begin{equation} |f(s+i\tau) - f(s)|<\varepsilon \end{equation}
holds. An almost-periodic analytic function is an analytic function that is regular in a strip $(\alpha, \beta)$ and possesses a relatively-dense set of $\varepsilon$-almost-periods for every $\varepsilon>0$. An almost-periodic analytic function on a closed strip $\alpha\leqslant\sigma\leqslant\beta$ is defined similarly. An almost-periodic analytic function on a strip $[\alpha, \beta]$ is a uniformly almost-periodic function of the real variable $\tau$ on every straight line in the strip and it is bounded in $[\alpha, \beta]$, i.e. on any interior strip. If a function $f(s)$, regular in a strip $(\alpha, \beta)$, is a uniformly almost-periodic function on at least one line $\sigma=\sigma_0$ in the strip, then boundedness of $f(s)$ in $[\alpha, \beta]$ implies its almost-periodicity on the entire strip $[\alpha, \beta]$. Consequently, the theory of almost-periodic analytic functions turns out to be a theory analogous to that of almost-periodic functions of a real variable (cf. almost-periodic function). Therefore, many important results of the latter theory can be easily carried over to almost-periodic analytic functions: the uniqueness theorem, Parseval's equality, rules of operation with Dirichlet series, the approximation theorem, and several other theorems.
References
[1] H. Bohr, "Almost-periodic functions", Chelsea, reprint (1947) (Translated from German)
[2] B.M. Levitan, "Almost-periodic functions", Moscow (1953) pp. Chapt. 7 (In Russian)

Comments
The hyphen between almost and periodic is sometimes dropped.
References
[a1] C. Corduneanu, "Almost periodic functions", Interscience (1961) pp. Chapt. 3 |
Mishka got an integer array $$$a$$$ of length $$$n$$$ as a birthday present (what a surprise!).
Mishka doesn't like this present and wants to change it somehow. He has invented an algorithm and called it "Mishka's Adjacent Replacements Algorithm". This algorithm can be represented as a sequence of steps:
The algorithm consists of the steps: replace each occurrence of $$$1$$$ in the array with $$$2$$$; replace each occurrence of $$$2$$$ in the array with $$$1$$$; replace each occurrence of $$$3$$$ with $$$4$$$; replace each occurrence of $$$4$$$ with $$$3$$$; $$$\dots$$$; replace each occurrence of $$$10^9 - 1$$$ with $$$10^9$$$; replace each occurrence of $$$10^9$$$ with $$$10^9 - 1$$$. Note that the dots in the middle of this algorithm mean that Mishka applies these replacements for each pair of adjacent integers ($$$2i - 1, 2i$$$) for each $$$i \in\{1, 2, \ldots, 5 \cdot 10^8\}$$$ as described above.
For example, for the array $$$a = [1, 2, 4, 5, 10]$$$, the following sequence of arrays represents the algorithm:
$$$[1, 2, 4, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$1$$$ with $$$2$$$) $$$\rightarrow$$$ $$$[2, 2, 4, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$2$$$ with $$$1$$$) $$$\rightarrow$$$ $$$[1, 1, 4, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$3$$$ with $$$4$$$) $$$\rightarrow$$$ $$$[1, 1, 4, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$4$$$ with $$$3$$$) $$$\rightarrow$$$ $$$[1, 1, 3, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$5$$$ with $$$6$$$) $$$\rightarrow$$$ $$$[1, 1, 3, 6, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$6$$$ with $$$5$$$) $$$\rightarrow$$$ $$$[1, 1, 3, 5, 10]$$$ $$$\rightarrow$$$ $$$\dots$$$ $$$\rightarrow$$$ $$$[1, 1, 3, 5, 10]$$$ $$$\rightarrow$$$ (replace all occurrences of $$$10$$$ with $$$9$$$) $$$\rightarrow$$$ $$$[1, 1, 3, 5, 9]$$$. The later steps of the algorithm do not change the array.
Mishka is very lazy and he doesn't want to apply these changes by himself. But he is very interested in their result. Help him find it.
The first line of the input contains one integer number $$$n$$$ ($$$1 \le n \le 1000$$$) — the number of elements in Mishka's birthday present (surprisingly, an array).
The second line of the input contains $$$n$$$ integers $$$a_1, a_2, \dots, a_n$$$ ($$$1 \le a_i \le 10^9$$$) — the elements of the array.
Print $$$n$$$ integers — $$$b_1, b_2, \dots, b_n$$$, where $$$b_i$$$ is the final value of the $$$i$$$-th element of the array after applying "Mishka's Adjacent Replacements Algorithm" to the array $$$a$$$. Note that you cannot change the order of elements in the array.
Input:
5
1 2 4 5 10
Output:
1 1 3 5 9

Input:
10
10000 10 50605065 1 5 89 5 999999999 60506056 1000000000
Output:
9999 9 50605065 1 5 89 5 999999999 60506055 999999999
The first example is described in the problem statement.
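Since each even value 2i is eventually replaced by 2i - 1 while odd values end up unchanged, the whole algorithm reduces to subtracting one from every even element. A short Python sketch of that observation:

```python
n = int(input())
a = list(map(int, input().split()))

# Even values become the preceding odd number; odd values are unchanged.
b = [x - 1 if x % 2 == 0 else x for x in a]
print(*b)
```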
Info
Provider CodeForces
Code CF1006A
|
Consider a game in which darts are thrown at a board. The board is formed by $10$ circles with radii $20$, $40$, $60$, $80$, $100$, $120$, $140$, $160$, $180$, and $200$ (measured in millimeters), centered at the origin. Each throw is evaluated depending on where the dart hits the board. The score is $p$ points ($p \in \{ 1, 2, \ldots , 10\} $) if the smallest circle enclosing or passing through the hit point is the one with radius $20 \cdot (11 - p)$. No points are awarded for a throw that misses the largest circle. Your task is to compute the total score of a series of $n$ throws.
The first line of the input contains the number of test cases $T$, where $1 \le T \le 10\, 000$. The descriptions of the test cases follow:
Each test case starts with a line containing the number of throws $n$ ($1 \leq n \leq 10^6$). Each of the next $n$ lines contains two integers $x$ and $y$ ($-200 \leq x, y \leq 200$) separated by a space—the coordinates of the point hit by a throw. The sum of $n$ across all $T$ test cases is at most $2^{21}$.
Print the answers to the test cases in the order in which they appear in the input. For each test case print a single line containing one integer—the sum of the scores of all $n$ throws.
Sample Input 1:
1
5
32 -39
71 89
-60 80
0 0
196 89

Sample Output 1:
29 |
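For reference, a short Python sketch of the scoring rule described above (integer arithmetic only, so there are no floating-point issues near ring boundaries):

```python
import sys

def main():
    data = sys.stdin.buffer.read().split()
    pos = 0
    T = int(data[pos]); pos += 1
    out = []
    for _ in range(T):
        n = int(data[pos]); pos += 1
        total = 0
        for _ in range(n):
            x = int(data[pos]); y = int(data[pos + 1]); pos += 2
            d2 = x * x + y * y
            # Find the smallest circle of radius 20*k (k = 1..10) enclosing
            # or passing through the point; a miss (d2 > 200^2) scores nothing.
            for k in range(1, 11):
                if d2 <= (20 * k) ** 2:
                    total += 11 - k
                    break
        out.append(str(total))
    sys.stdout.write("\n".join(out) + "\n")

main()
```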
Let us have two random variables $A$ and $B$ representing the lifetimes of two elements of a system, where $A$ has cdf $F_A(x)$, $A \sim \mathrm{Exp}(\lambda_1 + \lambda_{12})$, and $B$ has cdf $F_B(y)$, $B \sim \mathrm{Exp}(\lambda_2 + \lambda_{12})$, with joint cdf $H(x,y)$.
The Marshall–Olkin copula is defined as the survival copula
$$\bar{H}(x,y) = C_{\theta_A, \theta_B}(u,v) = \min (u^{1-\theta_A}v, uv^{1-\theta_B})$$
where $$\theta_A = \frac{\lambda_{12}}{\lambda_1+\lambda_{12}} \text{ and } \theta_B = \frac{\lambda_{12}}{\lambda_2+\lambda_{12}}$$
$$u = \bar{F_A}(x) = 1 - F_A(x) \text{ and }v = \bar{F_B}(y) = 1- F_B(y)$$
What is the formula for non-survival Marshall-Olkin copula that would have $u = F_A(x)$ and $v=F_B(y)$? |
If the sets are chosen randomly, then for the parameters you chose, the problem can be solved efficiently. In particular, the following trivial algorithm will output the correct answer with high probability: take each set of $S$, pad it by adding 10 random elements, and output the result (all sets of $S$, padded).
This might sound wasteful, but for the parameters you mentioned, if the sets are chosen randomly, it's unlikely you can do better. You can only do better if there exist two sets $s,t \in S$ that can be covered by a 25-element set, i.e., such that $|s \cup t| \le 25$. This happens only if $s,t$ have at least 5 elements in common (i.e., $|s \cap t| \ge 5$).
Now when $s,t$ are chosen uniformly at random from all possible sets of size 15, the probability that they have at least 5 elements in common is very small. It is approximately ${15 \choose 5}^2/1000^5 \approx 9 \times 10^{-9}$. Also, there are ${1000 \choose 2}$ possible pairs $s,t$, so by a union bound, the probability that there exist two such sets $s,t$ is about $0.0045$. This means there is only a small probability that you can cover two sets from $S$ by a single set of size 25, if the sets of $S$ are chosen randomly.
In general, for your problem, one approach would be to find all pairs of sets $s,t \in S$ that overlap in at least 5 elements. Build a graph where $S$ is the vertex set and you add an edge between $s,t$ if $|s \cap t| \ge 5$. Find a maximum matching in this graph. Then you can use this to build a set of 25-size sets (one 25-size set per edge in the matching, plus one set per vertex not touched by the matching). However, if the sets are generated randomly, this is unnecessary as it is unlikely to find any pair of sets that overlap in at least 5 elements.
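A minimal sketch of that pairing approach, assuming the input is a Python list of sets drawn from a universe of 1000 integers (the helper pad below, which tops sets up to 25 elements with arbitrary unused values, is my own addition):

```python
import itertools
import networkx as nx

def cover_with_25_sets(S, universe_size=1000):
    universe = range(universe_size)

    def pad(s):
        # Top the set up to 25 elements with arbitrary unused values.
        extra = itertools.islice((e for e in universe if e not in s), 25 - len(s))
        return s | set(extra)

    # Edge between two sets iff their union fits inside one 25-element set.
    G = nx.Graph()
    G.add_nodes_from(range(len(S)))
    for i, j in itertools.combinations(range(len(S)), 2):
        if len(S[i] & S[j]) >= 5:
            G.add_edge(i, j)

    matching = nx.max_weight_matching(G, maxcardinality=True)

    covered, result = set(), []
    for i, j in matching:                 # one 25-element set per matched pair
        result.append(pad(S[i] | S[j]))
        covered.update((i, j))
    for i in range(len(S)):               # pad every unmatched set individually
        if i not in covered:
            result.append(pad(S[i]))
    return result
```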
Anyway, if the sets are chosen uniformly at random, the problem is uninteresting for the parameters you gave. If the sets aren't chosen uniformly at random and have some structure, I recommend that you edit the question to describe this structure. |
I'm trying to understand the BRST complex in its Lagrangian incarnation, i.e. in the form closest to the original Faddeev-Popov formulation. It looks like the most important part of that construction (the proof of vanishing of higher cohomology groups) is very hard to find in the literature; at least I was not able to find it. Let me formulate a couple of questions on BRST, but in the form of exercises on Lie algebra cohomology.
Let $X$ be a smooth affine variety, and let $g$ be a (reductive?) Lie algebra acting on $X$; I think we assume $g$ to be at least unimodular, otherwise the BRST construction won't work, and we also assume that the map $g \to T_X$ is injective. In physics language this is a closed and irreducible action of the Lie algebra of a gauge group on the space of fields $X$. The structure sheaf $\mathcal{O}_X$ is a module over $g$, and I can form the Chevalley-Eilenberg complex with coefficients in this module$$C=\wedge g^* \otimes \mathcal{O}_X.$$
The ultimate goal of the BRST construction is to provide a "free model" of the algebra of invariants $\mathcal{O}_X^g$; it is not clear what a "free model" is exactly, but I think the BRST construction is just Tate's procedure of killing cycles for the Chevalley-Eilenberg complex above (Tate's construction works for any dg algebra, and $C$ is a dg algebra).
My first question is: what exactly is the cohomology of the complex $C$? In other words, before killing cohomology I'd like to understand what exactly has to be killed. To me it looks like a classical question on Lie algebra cohomology and, perhaps, it was discussed in the literature 60 years ago.
It is not necessary to calculate these cohomology groups and then follow Tate's approach to construct the complete BRST complex (complete means I have added anti-ghosts and Lagrange multipliers to $C$ and modified the differential), but even if I start with the BRST complex$$C_{BRST}=(\mathcal{O}_X \otimes \wedge (g \oplus g^*) \otimes S(g),\ d_{BRST}=d_{CE}+d_1),$$where could I find a proof that all higher cohomology vanishes? This post imported from StackExchange MathOverflow at 2014-08-15 09:41 (UCT), posted by SE-user Sasha Pavlov |
Question: Why is there NO Charge-Parity (CP) violation from a potential Theta term in the electroweak SU(2)$_{weak,flavor}$ sector, $\theta_{electroweak} \int F \wedge F$?
(ps. an explicit calculation is required.)
Background:
We know for a non-Abelian gauge theory, the $F \wedge F $ term is nontrivial and breaks $CP$ symmetry (thus break $T$ symmetry by $CPT$ theorem), which is this term:$$\int F \wedge F$$with a field strength $F=dA+A\wedge A$.
$\bullet$
SU(3)$_{strong,color}$ QCD:
To describe strong interactions of gluons (which couple quarks), we use QCD with gauge fields of non-Abelian SU(3)$_{color}$ symmetry. We can add this extra term to the QCD Lagrangian:$$\theta_{QCD} \int G \wedge G =\theta_{QCD} \int d^4x\, G_{\mu\nu}^a \tilde{G}^{\mu\nu,a}, $$where any nonzero $\theta_{QCD}$ breaks $CP$ symmetry. (p.s. and there we have the strong CP problem).
$\bullet$
Compare the strong interactions $\theta_{QCD,strong}$ to U(1)$_{em}$ $\theta_{QED}$: For U(1) electromagnetism, even if we have $\theta_{QED} \int F \wedge F$, we can rotate this term and absorb this into the fermion (which couple to U(1)$_{em}$) masses(?). For SU(3) QCD, unlike U(1) electromagnetism, if the quarks are not massless, this term of $\theta_{QCD}$ cannot be rotated away(?) as a trivial $\theta_{QCD}=0$.
$\bullet$
SU(2)$_{weak,flavor}$ electro-weak:
To describe electroweak interactions, we again have gauge fields of a non-Abelian SU(2)$_{weak,flavor}$ symmetry. Potentially this extra term in the electroweak Lagrangian can break $CP$ symmetry (thus break $T$ symmetry by the $CPT$ theorem):$$\theta_{electroweak} \int F \wedge F =\theta_{electroweak} \int d^4x\, F_{\mu\nu}^a \tilde{F}^{\mu\nu,a}, $$where the three component gauge fields $A$ under SU(2) are ($W^{1}$,$W^{2}$,$W^{3}$), or ($W^{+}$,$W^{-}$,$Z^{0}$) for the W and Z bosons.
This post imported from StackExchange Physics at 2014-06-04 11:35 (UCT), posted by SE-user Idear
Question [again, as at the beginning]: We have only heard of the CKM matrix in the weak SU(2) sector breaking $CP$ symmetry. Why is there NO CP violation from a potential Theta term of the electroweak SU(2)$_{weak,flavor}$ sector, $\theta_{electroweak} \int F \wedge F$? Hint: In other words, how should we rotate $\theta_{electroweak}$ to a trivial $\theta_{electroweak}=0$? ps. I foresee a reason already, but I wish an explicit calculation to be carried out. Thanks a lot! |
Inhomogeneous K-function
Estimates the inhomogeneous \(K\) function of a non-stationary point pattern.
Usage
Kinhom(X, lambda=NULL, ..., r = NULL, breaks = NULL,
       correction=c("border", "bord.modif", "isotropic", "translate"),
       renormalise=TRUE, normpower=1, update=TRUE, leaveoneout=TRUE,
       nlarge = 1000, lambda2=NULL, reciplambda=NULL, reciplambda2=NULL,
       diagonal=TRUE, sigma=NULL, varcov=NULL, ratio=FALSE)
Arguments

X: The observed data point pattern, from which an estimate of the inhomogeneous \(K\) function will be computed. An object of class "ppp" or in a format recognised by as.ppp().

lambda: Optional. Values of the estimated intensity function. Either a vector giving the intensity values at the points of the pattern X, a pixel image (object of class "im") giving the intensity values at all locations, a fitted point process model (object of class "ppm" or "kppm") or a function(x,y) which can be evaluated to give the intensity value at any location.

...: Extra arguments. Ignored if lambda is present. Passed to density.ppp if lambda is omitted.

r: Vector of values for the argument \(r\) at which the inhomogeneous \(K\) function should be evaluated. Not normally given by the user; there is a sensible default.

breaks: This argument is for internal use only.

correction: A character vector containing any selection of the options "border", "bord.modif", "isotropic", "Ripley", "translate", "translation", "none" or "best". It specifies the edge correction(s) to be applied. Alternatively correction="all" selects all options.

renormalise: Logical. Whether to renormalise the estimate. See Details.

normpower: Integer (usually either 1 or 2). Normalisation power. See Details.

update: Logical value indicating what to do when lambda is a fitted model (class "ppm", "kppm" or "dppm"). If update=TRUE (the default), the model will first be refitted to the data X (using update.ppm or update.kppm) before the fitted intensity is computed. If update=FALSE, the fitted intensity of the model will be computed without re-fitting it to X.

leaveoneout: Logical value specifying whether to use a leave-one-out rule when the intensity is estimated by kernel smoothing. See Details.

nlarge: Optional. Efficiency threshold. If the number of points exceeds nlarge, then only the border correction will be computed, using a fast algorithm.

lambda2: Advanced use only. Matrix containing estimates of the products \(\lambda(x_i)\lambda(x_j)\) of the intensities at each pair of data points \(x_i\) and \(x_j\).

reciplambda: Alternative to lambda. Values of the estimated reciprocal \(1/\lambda\) of the intensity function. Either a vector giving the reciprocal intensity values at the points of the pattern X, a pixel image (object of class "im") giving the reciprocal intensity values at all locations, or a function(x,y) which can be evaluated to give the reciprocal intensity value at any location.

reciplambda2: Advanced use only. Alternative to lambda2. A matrix giving values of the estimated reciprocal products \(1/(\lambda(x_i)\lambda(x_j))\) of the intensities at each pair of data points \(x_i\) and \(x_j\).

diagonal: Do not use this argument.

sigma, varcov: Optional arguments passed to density.ppp to control the smoothing bandwidth, when lambda is estimated by kernel smoothing.

ratio: Logical. If TRUE, the numerator and denominator of each edge-corrected estimate will also be saved, for use in analysing replicated point patterns.
Details
This computes a generalisation of the \(K\) function for inhomogeneous point patterns, proposed by Baddeley, Moller and Waagepetersen (2000).
The ``ordinary'' \(K\) function (variously known as the reduced second order moment function and Ripley's \(K\) function) is described under Kest. It is defined only for stationary point processes.
The inhomogeneous \(K\) function \(K_{\mbox{\scriptsize\rm inhom}}(r)\) is a direct generalisation to nonstationary point processes. Suppose \(x\) is a point process with non-constant intensity \(\lambda(u)\) at each location \(u\). Define \(K_{\mbox{\scriptsize\rm inhom}}(r)\) to be the expected value, given that \(u\) is a point of \(x\), of the sum of all terms \(1/\lambda(x_j)\) over all points \(x_j\) in the process separated from \(u\) by a distance less than \(r\). This reduces to the ordinary \(K\) function if \(\lambda()\) is constant. If \(x\) is an inhomogeneous Poisson process with intensity function \(\lambda(u)\), then \(K_{\mbox{\scriptsize\rm inhom}}(r) = \pi r^2\).
Given a point pattern dataset, the inhomogeneous \(K\) function can be estimated essentially by summing the values \(1/(\lambda(x_i)\lambda(x_j))\) for all pairs of points \(x_i, x_j\) separated by a distance less than \(r\).
This allows us to inspect a point pattern for evidence of interpoint interactions after allowing for spatial inhomogeneity of the pattern. Values \(K_{\mbox{\scriptsize\rm inhom}}(r) > \pi r^2\) are suggestive of clustering.
The argument lambda should supply the (estimated) values of the intensity function \(\lambda\). It may be either

- a numeric vector, containing the values of the intensity function at the points of the pattern X;
- a pixel image (object of class "im"), assumed to contain the values of the intensity function at all locations in the window;
- a fitted point process model (object of class "ppm", "kppm" or "dppm") whose fitted trend can be used as the fitted intensity (if update=TRUE the model will first be refitted to the data X before the trend is computed);
- a function, which can be evaluated to give values of the intensity at any locations;
- omitted: if lambda is omitted, then it will be estimated using a `leave-one-out' kernel smoother.
If lambda is a numeric vector, then its length should be equal to the number of points in the pattern X. The value lambda[i] is assumed to be the (estimated) value of the intensity \(\lambda(x_i)\) for the point \(x_i\) of the pattern \(X\). Each value must be a positive number; NA's are not allowed.

If lambda is a pixel image, the domain of the image should cover the entire window of the point pattern. If it does not (which may occur near the boundary because of discretisation error), then the missing pixel values will be obtained by applying a Gaussian blur to lambda using blur, then looking up the values of this blurred image for the missing locations. (A warning will be issued in this case.)

If lambda is a function, then it will be evaluated in the form lambda(x,y) where x and y are vectors of coordinates of the points of X. It should return a numeric vector with length equal to the number of points in X.

If lambda is omitted, then it will be estimated using a `leave-one-out' kernel smoother, as described in Baddeley, Moller and Waagepetersen (2000). The estimate lambda[i] for the point X[i] is computed by removing X[i] from the point pattern, applying kernel smoothing to the remaining points using density.ppp, and evaluating the smoothed intensity at the point X[i]. The smoothing kernel bandwidth is controlled by the arguments sigma and varcov, which are passed to density.ppp along with any extra arguments.
Edge corrections are used to correct bias in the estimation of \(K_{\mbox{\scriptsize\rm inhom}}\). Each edge-corrected estimate of \(K_{\mbox{\scriptsize\rm inhom}}(r)\) is of the form $$ \widehat K_{\mbox{\scriptsize\rm inhom}}(r) = (1/A) \sum_i \sum_j \frac{1\{d_{ij} \le r\} e(x_i,x_j,r)}{\lambda(x_i)\lambda(x_j)} $$ where
A is a constant denominator, \(d_{ij}\) is the distance between points \(x_i\) and \(x_j\), and \(e(x_i,x_j,r)\) is an edge correction factor. For the `border' correction, $$ e(x_i,x_j,r) = \frac{1(b_i > r)}{\sum_j 1(b_j > r)/\lambda(x_j)} $$ where \(b_i\) is the distance from \(x_i\) to the boundary of the window. For the `modified border' correction, $$ e(x_i,x_j,r) = \frac{1(b_i > r)}{\mbox{area}(W \ominus r)} $$ where \(W \ominus r\) is the eroded window obtained by trimming a margin of width \(r\) from the border of the original window. For the `translation' correction, $$ e(x_i,x_j,r) = \frac 1 {\mbox{area}(W \cap (W + (x_j - x_i)))} $$ and for the `isotropic' correction, $$ e(x_i,x_j,r) = \frac 1 {\mbox{area}(W) g(x_i,x_j)} $$ where \(g(x_i,x_j)\) is the fraction of the circumference of the circle with centre \(x_i\) and radius \(||x_i - x_j||\) which lies inside the window.
If renormalise=TRUE (the default), then the estimates described above are multiplied by \(c^{\mbox{normpower}}\) where \( c = \mbox{area}(W)/\sum (1/\lambda(x_i)). \) This rescaling reduces the variability and bias of the estimate in small samples and in cases of very strong inhomogeneity. The default value of normpower is 1 (for consistency with previous versions of spatstat) but the most sensible value is 2, which would correspond to rescaling the lambda values so that \( \sum (1/\lambda(x_i)) = \mbox{area}(W). \)

If the point pattern X contains more than about 1000 points, the isotropic and translation edge corrections can be computationally prohibitive. The computations for the border method are much faster, and are statistically efficient when there are large numbers of points. Accordingly, if the number of points in X exceeds the threshold nlarge, then only the border correction will be computed. Setting nlarge=Inf or correction="best" will prevent this from happening. Setting nlarge=0 is equivalent to selecting only the border correction with correction="border".

The pair correlation function can also be applied to the result of Kinhom; see pcf.
Value

An object of class "fv" (see fv.object). Essentially a data frame containing at least the following columns:

r: the vector of values of the argument \(r\) at which \(K_{\mbox{\scriptsize\rm inhom}}(r)\) has been estimated

theo: vector of values of \(\pi r^2\), the theoretical value of \(K_{\mbox{\scriptsize\rm inhom}}(r)\) for an inhomogeneous Poisson process

If ratio=TRUE then the return value also has two attributes called "numerator" and "denominator" which are "fv" objects containing the numerators and denominators of each estimate of \(K_{\mbox{\scriptsize\rm inhom}}(r)\).
References
Baddeley, A., Moller, J. and Waagepetersen, R. (2000) Non- and semiparametric estimation of interaction in inhomogeneous point patterns.
Statistica Neerlandica 54, 329--350.

See Also

Aliases

Kinhom

Examples
# inhomogeneous pattern of maples
X <- unmark(split(lansing)$maple)

# (1) intensity function estimated by model-fitting
# Fit spatial trend: polynomial in x and y coordinates
fit <- ppm(X, ~ polynom(x,y,2), Poisson())
# (a) predict intensity values at points themselves,
#     obtaining a vector of lambda values
lambda <- predict(fit, locations=X, type="trend")
# inhomogeneous K function
Ki <- Kinhom(X, lambda)
plot(Ki)
# (b) predict intensity at all locations,
#     obtaining a pixel image
lambda <- predict(fit, type="trend")
Ki <- Kinhom(X, lambda)
plot(Ki)

# (2) intensity function estimated by heavy smoothing
Ki <- Kinhom(X, sigma=0.1)
plot(Ki)

# (3) simulated data: known intensity function
lamfun <- function(x,y) { 50 + 100 * x }
# inhomogeneous Poisson process
Y <- rpoispp(lamfun, 150, owin())
# inhomogeneous K function
Ki <- Kinhom(Y, lamfun)
plot(Ki)

# How to make simulation envelopes:
# Example shows method (2)
smo <- density.ppp(X, sigma=0.1)
Ken <- envelope(X, Kinhom, nsim=99, simulate=expression(rpoispp(smo)),
                sigma=0.1, correction="trans")
plot(Ken)
Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2) |
Hello guys! I was wondering if you knew some books/articles that have a good introduction to convexity in the context of variational calculus (functional analysis). I was reading Young's "calculus of variations and optimal control theory" but I'm not that far into the book and I don't know if skipping chapters is a good idea.
I don't know of a good reference, but I'm pretty sure that just means that second derivatives have consistent signs over the region of interest. (That is certainly a sufficient condition for Legendre transforms.)
@dm__ yes have studied bells thm at length ~2 decades now. it might seem airtight and has stood the test of time over ½ century, but yet there is some fineprint/ loopholes that even phd physicists/ experts/ specialists are not all aware of. those who fervently believe like Bohm that no new physics will ever supercede QM are likely to be disappointed/ dashed, now or later...
oops lol typo bohm bohr
btw what is not widely appreciated either is that nonlocality can be an emergent property of a fairly simple classical system, it seems almost nobody has expanded this at length/ pushed it to its deepest extent. hint: harmonic oscillators + wave medium + coupling etc
But I have seen that the convexity is associated to minimizers/maximizers of the functional, whereas the sign second variation is not a sufficient condition for that. That kind of makes me think that those concepts are not equivalent in the case of functionals...
@dm__ generally think sampling "bias" is not completely ruled out by existing experiments. some of this goes back to CHSH 1969. there is unquestioned reliance on this papers formulation by most subsequent experiments. am not saying its wrong, think only that theres very subtle loophole(s) in it that havent yet been widely discovered. there are many other refs to look into for someone extremely motivated/ ambitious (such individuals are rare). en.wikipedia.org/wiki/CHSH_inequality
@dm__ it stands as a math proof ("based on certain assumptions"), have no objections. but its a thm aimed at physical reality. the translation into experiment requires extraordinary finesse, and the complex analysis starts with CHSH 1969. etc
While it's not something usual, I've noticed that sometimes people edit my question or answer with a more complex notation or incorrect information/formulas. While I don't think this is done with malicious intent, it has sometimes confused people when I'm either asking or explaining something, as...
@vzn what do you make of the most recent (2015) experiments? "In 2015 the first three significant-loophole-free Bell-tests were published within three months by independent groups in Delft, Vienna and Boulder.
All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them “loophole-free” in the sense that all remaining conceivable loopholes like superdeterminism require truly exotic hypotheses that might never get closed experimentally."
@dm__ yes blogged on those. they are more airtight than previous experiments. but still seem based on CHSH. urge you to think deeply about CHSH in a way that physicists are not paying attention. ah, voila even wikipedia spells it out! amazing
> The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. > A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment.
↑ suspect entire general LHV theory of QM lurks in these loophole(s)! there has been very little attn focused in this area... :o
how about this for a radical idea? the hidden variables determine the probability of detection...! :o o_O
@vzn honest question, would there ever be an experiment that would fundamentally rule out nonlocality to you? and if so, what would that be? what would fundamentally show, in your opinion, that the universe is inherently local?
@dm__ my feeling is that something more can be milked out of bell experiments that has not been revealed so far. suppose that one could experimentally control the degree of violation, wouldnt that be extraordinary? and theoretically problematic? my feeling/ suspicion is that must be the case. it seems to relate to detector efficiency maybe. but anyway, do believe that nonlocality can be found in classical systems as an emergent property as stated...
if we go into detector efficiency, there is no end to that hole. and my beliefs have no weight. my suspicion is screaming absolutely not, as the classical is emergent from the quantum, not the other way around
@vzn have remained civil, but you are being quite immature and condescending. I'd urge you to put aside the human perspective and not insist that physical reality align with what you expect it to be. all the best
@dm__ ?!? no condescension intended...? am striving to be accurate with my words... you say your "beliefs have no weight," but your beliefs are essentially perfectly aligned with the establishment view...
Last night dream, introduced a strange reference frame based disease called Forced motion blindness. It is a strange eye disease where the lens is such that to the patient, anything stationary wrt the floor is moving forward in a certain direction, causing them have to keep walking to catch up with them. At the same time, the normal person think they are stationary wrt to floor. The result of this discrepancy is the patient kept bumping to the normal person. In order to not bump, the person has to walk at the apparent velocity as seen by the patient. The only known way to cure it is to remo…
And to make things even more confusing:
Such disease is never possible in real life, for it involves two incompatible realities to coexist and coinfluence in a pluralistic fashion. In particular, as seen by those not having the disease, the patient kept ran into the back of the normal person, but to the patient, he never ran into him and is walking normally
It seems my mind has gone f88888 up enough to envision two realities that with fundamentally incompatible observations, influencing each other in a consistent fashion
It seems my mind is getting more and more comfortable with dialetheia now
@vzn There's blatant nonlocality in Newtonian mechanics: gravity acts instantaneously. Eg, the force vector attracting the Earth to the Sun points to where the Sun is now, not where it was 500 seconds ago.
@Blue ASCII is a 7 bit encoding, so it can encode a maximum of 128 characters, but 32 of those codes are control codes, like line feed, carriage return, tab, etc. OTOH, there are various 8 bit encodings known as "extended ASCII", that have more characters. There are quite a few 8 bit encodings that are supersets of ASCII, so I'm wary of any encoding touted to be "the" extended ASCII.
If we have a system and we know all the degrees of freedom, we can find the Lagrangian of the dynamical system. What happens if we apply some non-conservative forces in the system? I mean how to deal with the Lagrangian, if we get any external non-conservative forces perturbs the system?Exampl...
@Blue I think now I probably know what you mean. Encoding is the way to store information in digital form; I think I have heard the professor talking about that in my undergraduate computer course, but I thought that is not very important in actually using a computer, so I didn't study that much. What I meant by use above is what you need to know to be able to use a computer, like you need to know LaTeX commands to type them.
@AvnishKabaj I have never had any of these symptoms after studying too much. When I have intensive studies, like preparing for an exam, after the exam, I feel a great wish to relax and don't want to study at all and just want to go somehwere to play crazily.
@bolbteppa the (quanta) article summary is nearly popsci writing by a nonexpert. specialists will understand the link to LHV theory re quoted section. havent read the scientific articles yet but think its likely they have further ref.
@PM2Ring yes so called "instantaneous action/ force at a distance" pondered as highly questionable bordering on suspicious by deep thinkers at the time. newtonian mechanics was/ is not entirely wrong. btw re gravity there are a lot of new ideas circulating wrt emergent theories that also seem to tie into GR + QM unification.
@Slereah No idea. I've never done Lagrangian mechanics for a living. When I've seen it used to describe nonconservative dynamics I have indeed generally thought that it looked pretty silly, but I can see how it could be useful. I don't know enough about the possible alternatives to tell whether there are "good" ways to do it. And I'm not sure there's a reasonable definition of "non-stupid way" out there.
← lol went to metaphysical fair sat, spent $20 for palm reading, enthusiastic response on my leadership + teaching + public speaking abilities, brought small tear to my eye... or maybe was just fighting infection o_O :P
How can I move a chat back to comments?In complying to the automated admonition to move comments to chat, I discovered that MathJax is was no longer rendered. This is unacceptable in this particular discussion. I therefore need to undo my action and move the chat back to comments.
hmmm... actually the reduced mass comes out of using the transformation to the center of mass and relative coordinates, which have nothing to do with Lagrangian... but I'll try to find a Newtonian reference.
One example is a spring of initial length $r_0$ with two masses $m_1$ and $m_2$ on the ends such that $r = r_2 - r_1$ is it's length at a given time $t$ - the force laws for the two ends are $m_1 \ddot{r}_1 = k (r - r_0)$ and $m_2 \ddot{r}_2 = - k (r - r_0)$ but since $r = r_2 - r_1$ it's more natural to subtract one from the other to get $\ddot{r} = - k (\frac{1}{m_1} + \frac{1}{m_2})(r - r_0)$ which makes it natural to define $\frac{1}{\mu} = \frac{1}{m_1} + \frac{1}{m_2}$ as a mass
since $\mu$ has the dimensions of mass and since then $\mu \ddot{r} = - k (r - r_0)$ is just like $F = ma$ for a single variable $r$ i.e. an spring with just one mass
@vzn It will be interesting if a de-scarring followed by a re scarring can be done in some way in a small region. Imagine being able to shift the wavefunction of a lab setup from one state to another thus undo the measurement, it could potentially give interesting results. Perhaps, more radically, the shifting between quantum universes may then become possible
You can still use Fermi to compute transition probabilities for the perturbation (if you can actually solve for the eigenstates of the interacting system, which I don't know if you can), but there's no simple human-readable interpretation of these states anymore
@Secret when you say that, it reminds me of the no cloning thm, which have always been somewhat dubious/ suspicious of. it seems like theyve already experimentally disproved the no cloning thm in some sense. |
This is a good question. Unfortunately there are several criteria by which chemists identify whether a process is melting or not. One of them is called the Lindemann criterion, which says:
"Crystals are considered to melt, when the vibrational amplitude becomes half of the interatomic spacing in the crystal lattice."
What does it mean? Normally, at temperatures greater than 0 K, atoms possess kinetic energy. In fact, temperature itself is a measure of the average kinetic energy of the constituent atoms. The kinetic energy of an atom is related to the temperature as:
$$\text{Kinetic Energy}=\frac{3}{2}k\text{T}$$
where $k$ is a constant called the Boltzmann constant, with a value of $1.38\times10^{-23}\ \text{J/K}$. Atoms in a solid are characterized by their fixed mean positions. They only vibrate within a certain region around these positions. Since they have kinetic energy (and velocity as well), they tend to leave their current location, but repulsive forces from the other atoms push them back to their original positions. In this way, you could consider atomic bonds like "little springs".
The extent to which the atom displaces itself from the mean location is called its vibrational amplitude. As we increase temperature, the atoms have more velocity, and can consequently displace themselves further from the mean position. Lindemann defines the melting temperature as the temperature at which the amplitude becomes half of the spacing between two adjacent atoms of the crystal.
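As a rough illustration of how the amplitude grows with temperature (this is just an equipartition estimate for a single harmonic "bond spring", with a purely illustrative stiffness value): setting $\frac{1}{2}k_{spring}\langle x^2\rangle = \frac{1}{2}kT$ gives $x_{rms}=\sqrt{kT/k_{spring}}$.

```python
import math

k_B = 1.38e-23                 # Boltzmann constant, J/K

def rms_amplitude(T, k_spring):
    """RMS thermal displacement of a 1-D harmonic oscillator (equipartition estimate)."""
    return math.sqrt(k_B * T / k_spring)

# Purely illustrative bond stiffness of 100 N/m
for T in (300, 1000, 4000):
    print(f"T = {T:5d} K  ->  x_rms = {rms_amplitude(T, 100.0) * 1e12:.1f} pm")
```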
Yet another criterion for defining the melting point is the Born criterion, which says:
"Crystals are considered to melt, when the shear modulus approaches zero"
You might wonder, what is this
shear modulus? It is, in fact, a measure of how much tangential stress a solid object can handle, and the tangential strain caused by it. Tangential forces on an object are forces that act parallel to the surface of the object, hence the name. See the picture below that shows a tangential force applied on an originally cuboidal object:
Tangential (or shearing) stress is defined as:
$$\sigma _{shear}=\frac{F_{shear}}{A}$$
where $F_{shear}$ is the tangential (shearing) force, and $A$ denotes area of surface in question.
Considering the picture above, shearing strain would be defined as:
$$\epsilon _{shear}=\frac{\Delta x}{h}$$
There exists a unique proportionality between the stress applied and strain produced, and is given by:
$$\frac{\sigma _{shear}}{\epsilon _{shear}}=S$$
where $S$ is called the
shear modulus. It is a constant for a given material and a given temperature.
In a physical sense, liquids are considered as substances that
cannot withstand a tangential stress. When a tangential stress is applied, liquids simply keep on increasing the strain, even under small stress. This itself is a useful way to consider when a solid melts, when its shear modulus becomes zero!
Now, let's consider how we consider melting for the different examples that you have asked for:
Diamond: Diamond has a structure of covalent bonds arranged tetrahedrally around each carbon. Each bond has its characteristic bond length. Applying the Lindemann criterion here is advisable, so that the melting point is where the vibrational amplitude of the carbon atoms is half of the $\ce{C-C}$ bond distance. Ultimately some of the $\ce{C-C}$ bonds would break and the molten mixture would largely consist of radicals of varying sizes that permit free movement between themselves.
Graphite: While each layer of graphite is only weakly bonded to the other layer by van der Waals' forces, each layer (called graphene) itself is a quite a large molecule. You can consider a liquid as a large number of small molecules, so clearly graphene doesn't look like this. To melt it, you would have to break some of the $\ce{C-C}$ bonds so that we can produce small molecules that can move freely. We apply Lindemann condition in this case. The molten mixture would be similar to that of diamond.
Branched polymers: Remember how I told you to consider liquids as a large number of small molecules that can move freely, sliding among themselves? We consider the same thing over here. The branch bonds are comparatively weaker than the rest of the bonds in the polymer, and these are the ones that will break when heated. As far as bond-breaking is concerned, we use the Lindemann criterion. While the resulting unbranched polymers are big molecules, they are still small enough to exhibit liquid character at the high temperature required to break the branch bonds. The molten mixture would consist of the straight-chain polymers in radical forms. |
This question already has an answer here:
I have a problem calculating the electrostatic potential energy.
I rely on these equations coming from mechanics:
\begin{equation} U_{B}-U_{A} = -W_{A \ \rightarrow \ B} (done\ by \ the \ field \ force) \end{equation}
\begin{equation} U_{B}-U_{A} = W_{A \ \rightarrow \ B} (done\ by \ the \ opposite \ force) \end{equation}
Work done by the coulomb force (field force) is:
\begin{equation} W= \int_{A}^{B} \! \vec{F}.\,\vec{dr} \end{equation}
According to the picture
\begin{equation} F = \frac{q_{1}q_{2}}{4\pi e_{o} x^{2}} \vec{i} \end{equation}
\begin{equation} \vec{dr} =- dx \vec{i} \end{equation}
Therefore:
\begin{equation} W= \int_{A}^{B} \! \frac{q_{1}q_{2}}{4\pi e_{o} x^{2}}\, \vec{i}\cdot(- dx\, \vec{i}) \end{equation}
Let $B=r$ and $A=\infty$: \begin{equation} W= -\int_{\infty}^{r} \! \frac{q_{1}q_{2}}{4\pi e_{o} x^{2}} \, dx \end{equation}
\begin{equation} W= \frac{q_{1}q_{2}}{4\pi e_{o} } \left[\frac{1}{x}\right]_{\infty}^{r} \end{equation}
Then:
\begin{equation} W= \frac{q_{1}q_{2}}{4\pi e_{o} r} \end{equation}
When I put this result into equations at the top:
\begin{equation} U_{B}-U_{A} = -\frac{q_{1}q_{2}}{4\pi e_{o} r} \end{equation} As $U_{A} =0$, finally: \begin{equation} U_{B} = -\frac{q_{1}q_{2}}{4\pi e_{o} r} \end{equation} It turned out the potential energy is negative, but it is supposed to be positive since an external force is putting energy into the system. I don't know where my mistake is! |
In a book by Wise and Manohar,
Heavy Quark Physics (pg 80), they discuss the limit
\begin{equation}\lim _{\lambda\rightarrow \infty} \lambda^{\,z\,(\epsilon)}\end{equation}
where $z$ is some function of an infinitesimal parameter, $\epsilon$. Then they say that "as long as $z$ depends on $\epsilon$ in a way that allows one to analytically continue $z$ to negative values", this limit is zero. I'm not very familiar with analytic continuation (other than the qualitative idea of what it means), but this seems very strange to me.
I understand why the paths of contour integrals can be morphed between one another (due to Residue theorem), but why should such arguments hold for limits as well? |
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... |
Sanaris's answer is a great, succinct list of what each term in the free energy expression stands for: I'm going to concentrate on the $T\,S$ term (which you likely find the most mysterious) and hopefully give a little more physical intuition. Let's also think of a chemical or other reaction, so that we can concretely talk about a system changing and thus making some of its internal energy $H=U+p\,V$ available for work.
The $T S$ term arises roughly from the energy that is needed to "fill up" the rotational, vibrational, translational and otherwise distractional thermal energies of the constituents of a system. Simplistically, you can kind of think of its being related to the idea that you must use some of the energy released to make sure that the reaction products are "filled up" with heat so that they are at the same temperature as the reactants. So the $T S$ term is related to, but not the same as, the notion of
heat capacity: let's look at this a bit further.
Why can't we get at all the energy $\Delta H$? Well, actually we
can in certain contrived circumstances. It's just that these circumstances are not useful for calculating how much energy we can practically get to. Let's think of the burning of hydrogen:
$$\rm H_2+\frac{1}{2} O_2\to H_2O;\quad\Delta H \approx 286\,{\rm kJ\,mol^{-1}}\tag{1}$$
This is a highly exothermic one, and also one of the reactions of choice if you want to throw three astronauts, fifty tons of kit and about a twentieth of the most advanced-economy-in-the-world’s-1960s GDP at the Moon.
The thing about one mole of $H_2O$ is that it can soak up less heat than the mole of $H_2$ and half a mole of $O_2$; naively this would seem to say that we can get
more heat than the enthalpy change $\Delta H$, but this is not so. We imagine a thought experiment, where we have a gigantic array of enormous heat pads (all individually like an equilibrium “outside world") representing all temperatures between absolute zero and $T_0$ with a very fine temperature spacing $\Delta T$ between them. On my darker days I find myself imagining an experimental kit that looks eerily like a huge pallet on wheels of mortuary shelves, sliding in and out as needed by the experimenter! We bring the reactants into contact with the first heat pad, which is at a temperature $T_1 = T_0 - \Delta T$ a teeny-tiny bit cooler than $T_0$ thus reversibly drawing some heat $\Delta Q(T_1)$ off into the heat pad. Next, we bring the reactants into contact with the second heat pad at temperature $T_2 = T_0 - 2\,\Delta T$, thus reversibly drawing heat $\Delta Q(T_2)$ off into that heat pad. We keep cooling then shifting to the next lowest heat pad until we have visited all the heat pads and thus sucked all the heat off into our heat pads: see my sketch below:
Now the reactants are at absolute zero. There is no heat needed to "fill them up" to their temperature, so we can extract
all the enthalpy $\Delta H$ from the reaction as useful work. Let's imagine we can put this work aside in some ultrafuturistic perfect capacitor, or some such lossless storage for the time being.
Now we must heat our reaction products back up to standard temperature, so that we know what we can get out of our reaction if the conditions do not change. So, we simply do the reverse, as sketched below:
Notice that I said that $H_2O$ soaks up less heat than the reactants. This means that, as we heat the products back up to standard temperature, we take from the heat pads
less heat in warming up the water than we put into them in cooling the reactants down.
So far, so good. We have gotten
all the work $\Delta H$ out into our ultracapacitor without losing any! And we're back to our beginning conditions, or so it seems! What's the catch?
The experimental apparatus that let us pull this trick off is NOT back at its beginning state. We have left heat in the heat pads. We have thus degraded them: they have warmed up ever so slightly and so cannot be used indefinitely to repeatedly do this trick. If we tried to do the trick too many times, eventually the heat pads would be at ambient temperature and would not work any more.
So we haven’t reckoned the free energy at the standard conditions, rather we have simply calculated the free energy $\Delta H$ available in the presence of our unrealistic heat sink array. To restore the system to its beginning state and calculate what work we could get if there were no heat sink array here, we must take away the nett heat flows we added to all the heat pads and send them into the outside World at temperature $T_0$. This is the only "fair" measure, because it represents something that we could do with arbitrarily large quantities of reactants.
But the outside World at $T_0$ is warmer than any of the heat pads, so of course this heat transfer can’t happen spontaneously, simply by dint of Carnot’s statement of the second law!
We must therefore bring in a reversible heat pump and
use some of our work $\Delta H$ to pump this heat into the outside world to restore standard conditions: we would connect an ideal reversible heat pump to each of the heat pads in turn and restore them to their beginning conditions, as sketched below:
This part of the work that we use to run the heat pump and restore all the heat pads, if you do all the maths, is exactly the $T\,\Delta S$ term.
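To put rough numbers on this bookkeeping for reaction (1), using approximate textbook values at 298 K (standard sign convention, negative meaning released; these figures are quoted from memory and are only illustrative): $\Delta H \approx -285.8\,\mathrm{kJ\,mol^{-1}}$ and $\Delta S \approx -163\,\mathrm{J\,mol^{-1}\,K^{-1}}$, so about $49\,\mathrm{kJ\,mol^{-1}}$ goes to running the heat pump and roughly $237\,\mathrm{kJ\,mol^{-1}}$ remains as useful work:

```python
# Approximate values at 298 K for H2 + 1/2 O2 -> H2O(l); illustrative only.
T = 298.15        # K
dH = -285.8e3     # J/mol, enthalpy change
dS = -163.3       # J/(mol K), entropy change of the reacting system

dG = dH - T * dS  # Gibbs free energy change
print(f"T*dS = {T * dS / 1e3:.1f} kJ/mol,  dG = {dG / 1e3:.1f} kJ/mol of useful (non-expansion) work")
```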
The above is a mechanism whereby the following statement in Jabirali's Answer holds:
Processes that increase the Gibbs free energy can be shown to increase the entropy of the system plus its surroundings, and will therefore be prevented by the second law of thermodynamics.
The nice thing about the above is that it is a great way to look at
endothermic reactions. In an endothermic reaction, we imagine having an energy bank that we can borrow from temporarily. After we have brought the products back up to temperature, we find we have both borrowed $-\Delta H$ from the energy bank and put less heat back into the heat pads than we took from them. So heat can now flow spontaneously from the environment to the heat pads to restore their beginning state, because the heat pads are all at a lower temperature than the environment. As this heat flows, we can use a reversible heat engine to extract work from the heat flowing down the gradient. This work is, again, $-T\,\Delta S$, which is a positive work gotten from the heat flowing down the temperature gradient. The $-T\,\Delta S$ can be so positive that we can pay back the $\Delta H$ we borrowed and have some left over. If so, we have an endothermic reaction, and a nett free energy: this energy coming from the heat flowing spontaneously inwards from the environment to fill the higher entropy products (higher than the entropy of the reactants).
Take heed that, in the above, I have implicitly assumed the Nernst Heat Postulate -the not quite correct third law of thermodynamics - see my answer here for more details. For the present discussion, this approximate law is well good enough. |
Consider the following problem, from Bjork's
Arbitrage Theory in Continuous Time:
Consider the standard Black-Scholes model. Derive the arbitrage free price process for the $T$-claim $\mathcal{X}$ where $\mathcal{X}$ is given by $\mathcal{X}=\{S(T)\}^\beta$. Here $\beta$ is a known constant.
My approach.
Let $F(t,s)$ be the price of the claim $\mathcal{X}$ at time $t$, when the underlying spot price is $s$.
The Black-Scholes equation for $F$ is: $$ \begin{align} F_t + rsF_s + \frac12 \sigma^2s^2 F_{ss} - rF &= 0 \\ F(T, S(T)) &= S(T)^\beta. \end{align} $$
It is convenient to make a change of variables of the form $\tilde{F}(t,s) = e^{-rt}F(t,s)$, so that the associated stochastic process is: $$ \begin{align} dX &= rX dt + \sigma X dW \\ X(t) &= s. \end{align} $$
After changing variable to $Y = \log X$ and integrating, I find $$ X(T) = s \exp\left((r-\frac12 \sigma^2)(T-t) + \sigma(W(T) - W(t))\right). $$ So by the Feynman-Kac formula I have: $$ \begin{align} F(t,s) &= e^{-r(T-t)}\mathbb{E}\left[X(T)^\beta\right] \\ &= e^{-r(T-t)}\int_{-\infty}^{\infty}\frac{s^\beta e^{\beta z}}{\sqrt{2\pi\sigma^2(T-t)}} \exp\left(-\frac12 \frac{(z - (r-\frac12\sigma^2)(T-t))^2}{\sigma^2(T-t)}\right) dz, \end{align} $$ which after some computation gives, if I did not make any mistake: $$ F(t,s)=e^{-r(T-t)}s^\beta \exp\left(\frac12\sigma^2\beta^2(T-t) + (r-\frac12\sigma^2)\beta(T-t)\right). $$
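One quick way to sanity-check the closed form is a Monte Carlo estimate under the risk-neutral dynamics. The sketch below is mine, with arbitrary parameter values that are not part of Bjork's problem:

```python
import numpy as np

# Monte Carlo check of the closed-form price for X = S(T)^beta (illustrative parameters).
rng = np.random.default_rng(0)
s, r, sigma, beta, t, T = 100.0, 0.05, 0.2, 1.7, 0.0, 1.0
tau = T - t
n = 10**6

z = rng.standard_normal(n)
ST = s * np.exp((r - 0.5 * sigma**2) * tau + sigma * np.sqrt(tau) * z)
mc_price = np.exp(-r * tau) * np.mean(ST**beta)

closed_form = (np.exp(-r * tau) * s**beta
               * np.exp(0.5 * sigma**2 * beta**2 * tau
                        + (r - 0.5 * sigma**2) * beta * tau))

print(mc_price, closed_form)  # with 1e6 paths the two should agree to within ~0.1%
```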
Does it sound right?
Also, regardless of whether the pricing formula is correct, I am not sure if what I found is really the arbitrage free stochastic process for $\mathcal{X}$. |
Calculational Exercises
1. Let \(n \in \mathbb{Z}_+\) be a positive integer, let \(w_0 , w_1 ,\ldots, w_n \in \mathbb{C}\) be distinct complex numbers, and let \(z_0 , z_1 ,\ldots, z_n \in \mathbb{C}\) be any complex numbers. Then one can prove that there is a unique polynomial \(p(z)\) of degree at most \(n\) such that, for each \(k \in \{0, 1, . . . , n\}, p(w_k ) = z_k.\)
(a) Find the unique polynomial of degree at most \(2\) that satisfies \(p(0) = 0, p(1) = 1,\) and \(p(2) = 2.\)
(b) Can your result in Part (a) be easily generalized to find the unique polynomial of degree at most \(n\) satisfying \(p(0) = 0, p(1) = 1, \ldots , p(n) = n\)?
2. Given any complex number \(\alpha \in \mathbb{C},\) show that the coefficients of the polynomial
\[(z − \alpha)(z − \bar{\alpha})\]
are real numbers.
Proof-Writing Exercises
1. Let \(m, n \in \mathbb{Z}_+\) be positive integers with \(m \leq n\). Prove that there is a degree \(n\) polynomial \(p(z)\) with complex coefficients such that \(p(z)\) has exactly \(m\) distinct roots.
2. Given a polynomial \(p(z) = a_n z^n + \cdots + a_1 z + a_0\) with complex coefficients, define the
conjugate of \(p(z)\) to be the new polynomial
\[ \bar{p}(z) = \bar{a_n} z^n + \cdots + \bar{a_1}z + \bar{a_0}. \]
(a) Prove that \(\bar{p(z)} = \bar{p}(\bar{z}).\)
(b) Prove that \(p(z)\) has real coefficients if and only if \(\bar{p}(z) = p(z).\)
(c) Given polynomials \(p(z), q(z),\) and \(r(z)\) such that \(p(z) = q(z)r(z),\) prove that \(\bar{p}(z) = \bar{q}(z)\bar{r}(z).\)
3. Let \(p(z)\) be a polynomial with real coefficients, and let \( \alpha \in \mathbb{C}\) be a complex number. Prove that \(p(\alpha) = 0\) if and only if \(p(\bar{\alpha}) = 0.\) |
robot picking up plants
The end effector of these pneumatic picker-uppers consists of a shovel-shaped set of needles. The gripper takes advantage of the increased density of the plant's rootball (relative to the rest of the soil) so that compressive force is tuned just enough to hold onto the plant and not crush the roots.
The gripper has to overcome adhesive forces (wall of the tray or soil sticking to the seedling) as well as gravity. The end effector has three needles, so each needle needs to handle
$F \sin \alpha = G/3 + A/3 - F_{friction} \cos \alpha$
where $F_{friction} = \mu \cdot \frac{G+A}{3(\mu \cos \alpha + \sin \alpha)}$
$\alpha$ being the needle angle, $G$ being the weight of the plug, $A$ being the adhesion force, and $\mu$ being the coefficient of friction.
A pansy plug weighs approximately 20g, and the coefficient of friction can be assumed similar to snow ~0.2. Adhesion force between water and plastic is estimated to be 4N, and gripper force is estimated to be 5N.
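Plugging those estimates into the force balance gives a feel for how the required needle force falls off with angle. This is my own quick script, taking the quoted 20 g, 0.2, 4 N and 5 N figures at face value:

```python
import numpy as np

# Required axial force per needle vs. needle angle, using the rough estimates above.
G = 0.020 * 9.81      # plug weight [N]
A = 4.0               # adhesion force [N]
mu = 0.2              # soil/plastic friction coefficient

def needle_force(alpha_deg):
    """Axial force needed per needle to hold the plug at needle angle alpha."""
    a = np.radians(alpha_deg)
    F_fric = mu * (G + A) / (3 * (mu * np.cos(a) + np.sin(a)))
    return ((G + A) / 3 - F_fric * np.cos(a)) / np.sin(a)

for alpha in (5, 10, 15, 20):
    print(alpha, round(needle_force(alpha), 2))
# Around alpha ~ 10 deg the required force (~3.8 N) sits comfortably under the
# ~5 N the gripper can supply, consistent with the angle quoted below.
```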
The tradeoff between plug forces and friction sets $\alpha$ to be somewhere around 10$^\circ$. |
You are right in that gravity did not change during data collection. You are a victim of uncertainty, which is a very important part of experimental physics. I'm sorry in advance for the "wall of text", and I hope that this clears up some confusion.
The problem is that $1.50$ may not be
exactly $1.500000000...$. Because the numbers are provided rounded, they are not exact and you have lost information. Imagine that my watch can only report time to the nearest second and my measuring tape only reports to the nearest meter. If I measure a car's movement and it moves 2.3 meters in 0.8 seconds (true measurements, 2.875 m/s) I am required to round all my data to the nearest round number (2 meters in 1 second, 2.0 m/s). So, if I calculate a number based on my rounded data, it won't perfectly reflect reality because my data do not perfectly reflect reality. Even though your numbers are more precise than my example (accurate down to hundredths of a second and centimeters), there is still some amount of uncertainty in the numbers.
Feel free to skip to the "What it all means" section. The stuff after this is pretty dry and obtuse.
Quantitative Explanation
Note that I'm going to call displacement $x$ and time $t$, just for convenience.
Let's have a look at a subset of the data as an example:
Time (s) | Displacement (m)
1.50 | 3.09
2.00 | 6.60
You know the displacement to two decimal places-- that means that, for $t = 1.50$, $x$ could be anywhere from $3.085$ to $3.094999... \approx 3.095$, and it would still be okay to call it $3.09$, as long as you're rounding to that number of significant digits. Similarly, the time might not be exactly $1.5$! So, if you calculate anything based on those numbers, there's a certain amount of uncertainty in the result. Since there are two variables ($t$ and $x$), they can both vary at once. With the equation $v = (x_f - x_i)/\Delta t$, $v$ is biggest when $\Delta x$ is maximized and $\Delta t$ is minimized (dividing by a smaller number yields a larger number), and $v$ is smallest when $\Delta x$ is smallest and $\Delta t$ is biggest. We don't know the true, unrounded values of the two variables, so any possible combination of them is just as good as any other. Here's the range of possible velocities based on the above displacements:
$t = 1.5 \rightarrow 2.0$:
$$ v_{max} = \frac{6.605 - 3.085}{1.995 - 1.505} \approx 7.18 \, \mathrm{m/s}$$
$$ v_{mid} = \frac{6.600 - 3.090}{2.000 - 1.500} = 7.02 \, \mathrm{m/s}$$
$$ v_{min} = \frac{6.595 - 3.095}{2.005 - 1.495} \approx 6.86 \, \mathrm{m/s}$$
Quite a range! Let's do it again for the next time so that we can get a range of accelerations:
$t = 2.0 \rightarrow 2.5$:
$$ v_{max} = \frac{9.695 - 6.595}{2.495 - 2.005} \approx 6.33\, \mathrm{m/s}$$
$$ v_{mid} = \frac{9.690 - 6.600}{2.500 - 2.000} = 6.18 \, \mathrm{m/s}$$
$$ v_{min} = \frac{9.685 - 6.605}{2.505 - 1.995} \approx 6.04 \, \mathrm{m/s}$$
Again, quite a range. Now, to find the max,min accelerations possible for your data, you take the same approach. For $a_{max}$, divide the biggest possible $\Delta v$ by the smallest possible $\Delta t$, and the reverse for the min. For the above numbers, you get accelerations like so:
$$ a_{min} = \frac{6.33 - 6.86}{2.505 - 1.995} \approx -1.04 \, \mathrm{m/s}^2$$
$$ a_{mid} = \frac{6.18 - 7.02}{2.5 - 2} = -1.68 \, \mathrm{m/s}^2$$
$$ a_{max} = \frac{6.04 - 7.18}{2.495 - 2.005} \approx -2.33 \, \mathrm{m/s}^2$$
where "min" and "max" are used (somewhat sloppily) to mean "greatest in magnitude", not the true meanings of "minimum" and "maximum". If I haven't messed up the numbers, the intermediate accelerations could be anywhere in the above range!
What it all means: Now you should be able to recognize that, because your accelerations are derived from numbers of limited certainty and they fall within the range given above, they cannot really be said to be different-- they are said to be "within uncertainty" of each other. It would be appropriate to take the average of all your values and call that the best guess you have of the acceleration. If you wanted to be a true scientist about it, you might have a go at calculating the uncertainty in acceleration, which is nontrivial, or you could go the lazy route (like a lot of scientists :)) and use something like the standard error of all your acceleration values. Either way, the end-all is that the acceleration didn't change during the data collection, it only looks like it did because you didn't measure position and time precisely enough :)
If you're interested in reducing the uncertainty, read on!
These numbers are not acceptable; the range is too big. The problem is that our uncertainties ($0.005 \mathrm{m}$ and $0.005 \mathrm{s}$) are very large compared to our numbers: we're using time steps of half a second, but we have an uncertainty of $2 \cdot 0.005 = 0.01$ in each variable (twice the uncertainty because you're taking the difference of two values). You can reduce the effect by evaluating the difference across more than one time step (i.e. find the acceleration from $t=0 \rightarrow 1$ instead of $0 \rightarrow 0.5$). This way, the change in time (and displacement) is bigger, but the uncertainty remains the same, so it impacts the result less!
Table of Contents
The nonlocal problem for the differential-operator equation of the even order with the involution Article References Ya. O. Baranetskij, P. I. Kalenyuk, L. I. Kolyasa, M. I. Kopach 109-119
On the convergence criterion for branched continued fractions with independent variables Article References R. I. Dmytryshyn 120-127
Some classes of dispersible dcsl-graphs Article References J. Jinto, K. A. Germina, P. Shaini 128-133
First Reformulated Zagreb Indices of Some Classes of Graphs Article References V. Kaladevi, R Murugesan, K. Pattabiraman 134-144
Parabolic by Shilov systems with variable coefficients Article References V. A. Litovchenko 145-153
On meromorphically starlike functions of order $\alpha$ and type $\beta$, which satisfy Shah's differential equation Article References Yu. S. Trukhan, O. M. Mulyava 154-162
$FG$-coupled fixed point theorems in cone metric spaces Article References E. Prajisha, P. Shaini 163-170
Some fixed point results in complete generalized metric spaces Article References S. M. Sangurlu, D. Turkoglu 171-180
On the growth of a composition of entire functions Article References M. M. Sheremeta 181-187
Skew semi-invariant submanifolds of generalized quasi-Sasakian manifolds Article References M. D. Siddiqi, A. Haseeb, M. Ahmad 188-197
Metric on the spectrum of the algebra of entire symmetric functions of bounded type on the complex $L_\infty$ Article References T. V. Vasylyshyn 198-201
Faithful group actions and Schreier graphs Article References M. Fedorova 202-207
The journal is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported. |
TL;DR: There is a simple algorithm that runs in time $O(n \log n)$ and finds the inversion vector of a given array. Furthermore, there is a time lower bound of $\Omega(n \log n)$ for any comparison-based algorithm for this problem, based on a reduction from the sorting problem. I have never read the paper Yuval Filmus referenced in his answer, but from a brief reading it seems that the operations that the data structure permits are not exactly what you need in order to implement an algorithm for computing the inversion vector.
The lower bound
Edit: As D.W. mentioned, the $\Omega(n \log n)$ lower bound only holds for comparison-based algorithms. There exist sorting algorithms whose running time is $o(n \log n)$, by going outside the comparison-based model.
Suppose towards a contradiction that there exists an algorithm $\mathcal A$ that, given an array of elements $X$ and its length $n$, returns their inversion vector $Y$ in time $o(n\log n)$. Given the assumed algorithm $\mathcal A$, we present a sorting algorithm.
Sorting algorithm:
Input: an array $X$ and its length $n$. In order to simplify the proof, we assume that all of the elements in $X$ are unique, i.e. that no element appears more than once in $X$.
Output: an array that consists of the elements in $X$, in a sorted order.
1. Compute $Y_1 = \mathcal A (X,n)$. Intuitively, $Y_1 [i]$ contains the number of elements in $X$ that are larger than $X[i]$ and "to its right".
2. Generate the array $R = reverse(X)$, i.e. $R[i] = X[n-i]$ for every $i \in [n]$.
3. Compute $Y_2 = \mathcal A (R,n)$. Intuitively, $Y_2 [i]$ contains the number of elements in $R$ that are larger than $R[i]$ and "to its right", i.e. the number of elements in $X$ that are larger than $X[n-i]$ and "to its left".
4. Generate a new array $I$ that satisfies $I[i] = Y_1 [i] + Y_2 [n-i] + 1$ for every $i\in [n]$. Intuitively, $I[i]$ is the number of elements that are greater than or equal to $X[i]$ in $X$, i.e. the position in which we should put $X[i]$ in the output array.
5. Generate the output array: $O[I[i]] = X[i]$ for every $i\in [n]$.
The correctness of the algorithm is immediate. As to its complexity, steps 2,4 and 5 take $O(n)$, and steps 1 and 3 take $o(n \log n)$. We were therefore able to sort an array in time $o(n\log n)$, which contradicts the known lower bound of $\Omega(n\log n)$ for sorting.
An algorithm that runs in time $O(n\log n)$
1. Create an empty balanced binary search tree, e.g. an AVL tree or a red-black tree. Each node in the tree will hold additional information: the number of elements in its subtree. This data can be easily maintained under insertions and deletions.
2. Go over the input array from right to left, and for each element $X[i]$: insert $X[i]$ into the tree. While doing so, use the additional information in the nodes in order to compute the number of elements in the tree that are larger than $X[i]$, and save the result in $O[i]$. Finally, update the additional information along the insertion path in the tree and re-balance the tree.
3. Output the array $O$.
The correctness of the algorithm follows from the fact that we insert the elements to the tree from the rightmost element in the array to the leftmost element in the array. Therefore, when inserting the element in the $i$'th position to the tree, the elements that are currently in the tree are all of the elements to its right, and hence the computation in step 2 returns the number of elements in the array that are larger than the $i$'th element and to its right, as desired.
As for the complexity of the algorithm, since the height of the tree is $O(\log n)$, the algorithm runs in time $O(n \log n)$.
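For completeness, here is a compact sketch of the same right-to-left counting idea. It uses a Fenwick (binary indexed) tree over element ranks instead of an augmented balanced BST, which gives the same $O(n \log n)$ bound with less bookkeeping; this is my own code, not from the referenced paper:

```python
def inversion_vector(X):
    """O[i] = number of elements to the right of X[i] that are larger than X[i]."""
    n = len(X)
    # Coordinate-compress to ranks 1..n (assuming distinct elements, as above).
    rank = {v: i + 1 for i, v in enumerate(sorted(X))}
    tree = [0] * (n + 1)

    def add(i):                 # Fenwick update: mark rank i as inserted
        while i <= n:
            tree[i] += 1
            i += i & (-i)

    def prefix_count(i):        # how many inserted elements have rank <= i
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & (-i)
        return s

    O = [0] * n
    inserted = 0
    for i in range(n - 1, -1, -1):         # scan right to left
        r = rank[X[i]]
        O[i] = inserted - prefix_count(r)  # inserted elements with larger rank
        add(r)
        inserted += 1
    return O

print(inversion_vector([3, 1, 4, 1.5, 2]))  # -> [1, 3, 0, 1, 0]
```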
Hope that helps :) |
After assembling the vertical axis, all that was left was bolting on the wooden desktop and testing.
pretty computer on pretty desk
Between assembly and testing the HobbyShop staff started storing stuff on my desk...
HobbyShop staff trust my desk as a shelf.
Powersupply below. Foam cup only holds screws (no liquid don't worry)
I conducted some repeatability abbe-error tests with a laser pointer. The laser fixture was clamped to the desk and pointed towards the wall, 1m away (there was a brick column in the way!).
Laserpointer assembly clamped to the center of the desk.
The test shown below has the desk commanded to move between two set points 14cm apart. The laser-projected locations of the lower setpoint were marked on a piece of paper, from which positioning error can be determined from average deviation. This positioning error (unweighted) ended up being an average of 2.39mm over the 1m distance, so 0.33mm error over the 14cm travel.
This desk therefore has an unweighted positioning error of 2.4 microns per mm-travel.
An unweighted desk is an unused desk. The desk-requirements allow for an unweighted desk when moving, but it still needs to hold stuff.
I grabbed 2.5lb (1.134kg) and 10lb (4.536kg) weights and observed their effects when placed on different areas of the desk relative to the rail-ballscrew shafts.
2.5lb weight at (-430, 420)
Experiment with 20lbs at (0, 250)
Coordinate system for desk tests
After adding HDPE skids (see vertical axis build post), I experimentally determined max yaw displacement by pushing on the corner of the desk until it hit the hardstops. These projected errors were +19.83mm and -22.24mm for a vaguely 10lbf push.
Using the same Abbe error equations as the previous linear axis testing,
$\alpha_{pitch} = \frac{\delta_y}{L}$
and
$err_{pitch} = \frac{\delta_y y}{L+y}$
$\alpha_{roll} = \frac{\delta_x}{L}$
and
$err_{roll} = \frac{\delta_x x}{L+x}$
where I'm making the approximation that vertical displacement only comes from pitch and horizontal displacement from roll since I have so few samples and since the measurements are reasonably close within sets.
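For reference, the arithmetic looks like this in code. The deflection values below are placeholders rather than measured numbers (those live in the error spreadsheet); only the 1 m projection distance and the load position from the captions are taken from above:

```python
# Abbe-error bookkeeping sketch: projected deflections at the wall -> angles and
# displacement errors at the load point. delta_x/delta_y are placeholder values.
L = 1000.0                    # laser projection distance [mm]
x, y = 430.0, 420.0           # load position relative to the shafts [mm]
delta_x, delta_y = 1.5, 2.0   # hypothetical projected deflections [mm]

alpha_roll = delta_x / L               # small-angle roll [rad]
alpha_pitch = delta_y / L              # small-angle pitch [rad]
err_roll = delta_x * x / (L + x)       # error at the load point [mm]
err_pitch = delta_y * y / (L + y)      # error at the load point [mm]

print(f"pitch {alpha_pitch:.2e} rad -> {err_pitch:.2f} mm at the load")
print(f"roll  {alpha_roll:.2e} rad -> {err_roll:.2f} mm at the load")
```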
Using these error calculations, I calculated pitch, roll, and yaw stiffnesses of the desk, where roll stiffness > pitch > yaw by approximately an order of magnitude each.
I also did some qualitative testing, and discovered that my system is too low-friction and too backdrivable for my motor to support loads as predicted by the error spreadsheet. That's kinda expected, given that my system uses a high-pitch ballscrew.
However, an unfortunate consequence of this is that a load of ~7lbs (laptop + 2.5lb weight) is the most this actuator can take while traveling up and down at speed, and it sounds terrible doing it. I could have tuned the system to run at a lower speed, but this is somewhat difficult to do with my software setup. Soooo... meh.
Desk happily traveling with 20N loads, then getting upset at 30N load.
Even small dynamic loads lower the desk.
The desk holds laptop and legs in static-loading just fine
Adding a gearbox between my motorshaft and the ballscrew would help make my desk sittable, but in its current setup I doubt it would hold close to bodyweight before either my motordriver overheats or my rails break. So, no-go on the "Real Desk" functional requirement.
The desk does hold at least static-108N (laptop + 20lb weights) without failing, so it does meet the class calculation-standards with 100N loads (albeit barely; it probably wouldn't meet the expected 2x safety factor).
Circling back to the original error predictions for a 100N load, I had expected to get 0.23mm displacement from the theoretical desk. Instead, I got 2.87mm - an order of magnitude higher.
What went wrong?
Searching through the error spreadsheet, I found a problem with my model.
Linear stiffness of the carriage in the model equals bearing stiffness
I had included flexures in my carriage to account for parallelism-errors with the rails, and had discovered that the rail-shafts bend before the flexures do when I was assembling the vertical axis. And that makes sense - the rails are only 8mm in diameter and have an unsupported length of ~350mm, whereas the ball-bushings are set in a thick block of aluminum.
I had considered shaft stiffness before (in that post, actually), but at the time I was only concerned with whether deflections approached yield stress. Returning back to those calculations, and changing rotation and linear stiffnesses in the spreadsheet to be an average of ballscrew and rail shaft stiffnesses, I get some more reasonable results.
Shaft compliance calculations
new results. Note the F = kX displacement.
Reality only matches models when the models are accurate - shaft stiffness is where my order of magnitude discrepancy came from.
That's it for the 2.70 desk!
Desk being a desk. |
Recall that Maxwell's equations (in the absence of losses) require only that $n^2 = \epsilon\mu$. So when you take the square root, you are mathematically allowed to take either the positive or the negative square root.
Of course, then the question becomes, why would you
want to take the negative square root? Clearly, this is only an issue if $\epsilon\mu > 0$; if $\epsilon\mu < 0$ (ie, if one is negative and one is positive), then the index of refraction becomes imaginary. That has its own set of complications, which is outside the scope of your question. The original paper by Veselago that Mew cites in a comment discusses the consequences of $\epsilon < 0, \mu < 0$, and explains why using $n < 0$ makes sense in that case.
Recall one of the steps in the derivation of the wave equation (Veselago gives these as as equation 5):$$\mathbf{k}\times\mathbf{E} = \frac{\omega}{c}\mu \mathbf{H} \\\mathbf{k}\times\mathbf{H} = -\frac{\omega}{c}\epsilon \mathbf{E}$$These two equations define the handedness of the $\{ \mathbf{E}, \mathbf{H}, \mathbf{k}\}$ system: if $\epsilon, \mu > 0$, then these three vectors form a right-handed set. If $\epsilon, \mu < 0$, then these three vectors form a
left-handed set. Why does that matter? Because the Poynting vector $\mathbf{S}$, which gives the energy flux, points in the direction of $\mathbf{E}\times\mathbf{H}$, so the triplet $\{\mathbf{E}, \mathbf{H}, \mathbf{S}\}$ always forms a right-handed set. As we saw from above, if $\epsilon, \mu > 0$, then the wavevector $\mathbf{k}$ points in the same direction as $\mathbf{S}$. But if $\epsilon, \mu < 0$, then $\mathbf{k}$ points in the direction opposite to $\mathbf{S}$. The direction of the wave vector gives you the direction of the phase velocity, and the direction of the Poynting vector gives you the direction of the group velocity. The fact that those two point in opposite directions requires the index of refraction to be negative.
So that's why $\epsilon < 0, \mu < 0$ gives you a negative refractive index. What are the consequences? There are three major consequences Veselago gives (though he gives them in a different order):
1. Refraction is reversed when passing between substances of $n<0$ and $n>0$. The light refracts away from the normal when passing from a medium of lower $|n|$ to a medium of higher $|n|$, rather than refracting towards the normal.
2. The Doppler effect is reversed. Instead of frequency increasing when the source moves towards the observer (and decreasing when the source moves away), frequency decreases when the source moves towards the observer.
3. Cerenkov radiation points in a different direction. Instead of propagating at an acute angle $\theta$ relative to the direction of $\mathbf{k}$, it propagates at an obtuse angle $\theta$ relative to $\mathbf{k}$. |
There are really two questions here: what did you do wrong, and how do you do it right? What you did wrong was mostly related to your statement of the fundamental theorem of arithmetic. You should have
$$N=\prod_{i=1}^\infty p_i^{k_i}$$
where $k_i$ are nonnegative integers, all but finitely many equal to zero (so this is really a finite product), and $p_i$ are the sequence of all primes (so each one is different). Note that these are being multiplied, not summed. Then your divisor count is $\prod_{i=1}^\infty (k_i+1)$ which is equivalent to what you wrote.
The important thing here is that the $p_i$ need to be distinct; $2^1 \cdot 2^1 \cdot 2^1$ has $4$ divisors, not $16$. The other important thing is that if all you do is throw in another prime factor, you have the most impact when it is a new prime factor that $N$ isn't already divisible by. Specifically you double the number of divisors. Whereas if $k_i=1$, say, then making it $2$ only increases the number of divisors by a factor of $3/2$.
A crude upper bound on the smallest $N$ with at least $M$ divisors is $p_m \#$, where $m=\lceil \log_2(M) \rceil$ and $p\#$ is the primorial function, the product of all primes less than or equal to $p$. Then $p_m\#$ will be a product of $m$ distinct primes, so it will have $2^m \geq M$ divisors. In your example, $m=9$ so $p_m\#=2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 \cdot 17 \cdot 19 \cdot 23=223092870$, which has 512 divisors.
Now the point is that you can do better by giving the small primes bigger exponents than the large primes. For example, you can take out the factor of 23 and increase the exponent on $2$ to $3$. Previously, $k_9+1$ and $k_1+1$ were both $2$, now they are $1$ and $4$ respectively, so the number of divisors has stayed the same. But $N$ has shrunk, because $4<23$.
You can do this again with $19$ and $3$.
It no longer does you any good to do it with $17$ and $5$, because $25>17$, but that doesn't mean that you have found the optimal solution; you just have to modify more $k_i$ at the same time in order to find a better solution. In particular, since $2^4 = 16 < 17$, you can trade the factor of $17$ for four more powers of $2$, making $k_1=7$ and $k_7=0$ and obtaining $2^7 \cdot 3^3 \cdot 5 \cdot 7 \cdot 11 \cdot 13$, which again has $512$ divisors and is smaller. One more rebalancing, $k_1=6$, $k_2=2$, $k_3=2$, trades a factor of $2 \cdot 3 = 6$ for a factor of $5$ and only costs a few divisors: $2^6 \cdot 3^2 \cdot 5^2 \cdot 7 \cdot 11 \cdot 13 = 14414400$ has $7 \cdot 3 \cdot 3 \cdot 2 \cdot 2 \cdot 2 = 504$ divisors, still at least $500$.
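If you want to confirm that kind of bookkeeping by brute force, a short search over exponent patterns does it (my own sketch, not part of the argument above): a minimal $N$ can always be taken to have non-increasing exponents on the smallest primes, so it is enough to enumerate those.

```python
# Brute-force check: smallest N with at least 500 divisors, enumerating
# candidates of the form 2^k1 * 3^k2 * ... with non-increasing exponents.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
LIMIT = 10**9

def search(i, value, divisors, max_exp, best):
    """Extend by powers of primes[i], tracking the value and its divisor count."""
    if divisors >= 500:
        best[0] = min(best[0], value)
    if i == len(primes):
        return
    p, v = primes[i], value
    for k in range(1, max_exp + 1):
        v *= p
        if v > min(LIMIT, best[0]):
            break
        search(i + 1, v, divisors * (k + 1), k, best)

best = [LIMIT]
search(0, 1, 1, 30, best)
print(best[0])  # prints 14414400 (which has 504 divisors)
# Changing the condition to "divisors == 500" answers the exactly-500 version instead.
```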
I think this number, $14414400$, is actually optimal for the problem of finding the smallest number with at least 500 divisors. This number is actually considerably smaller than your solution, but that is because I've allowed for the freedom to have more than 500 divisors, whereas it appears that in your case you must have exactly 500 divisors. This will actually force $N$ to be a fair bit larger, because $500 = 2^2 \cdot 5^3$ is divisible by $5$ three times, so you are forced to have three exponents of $4$ (or one of $4$ and one of $24$, or one of $124$). The cheapest way to do that is to push the exponents of $3$ and $5$ up to $4$, pull the exponent of $2$ down to $4$, and drop the factor of $13$, which lands you at $2^4 \cdot 3^4 \cdot 5^4 \cdot 7 \cdot 11 = 62370000$ with exactly $500$ divisors. |
In 1+1D Ising model with a transverse field defined by the Hamiltonian
\begin{equation}
H(J,h)=-J\sum_i\sigma^z_i\sigma_{i+1}^z-h\sum_i\sigma_i^x
\end{equation}
There is a duality transformation which defines new Pauli operators $\mu^x_i$ and $\mu^z_i$ in a dual lattice
\begin{equation}
\mu_i^z=\prod_{j\leq i}\sigma^x_j
\qquad
\mu_i^x=\sigma^z_{i+1}\sigma^z_{i}
\end{equation}
then these $\mu_i^x$ and $\mu_i^z$ satisfy the same commutation and anti-commutation relations of $\sigma^x_i$ and $\sigma^z_i$, and the original Hamiltonian can be written in terms of $\mu_i^x$ and $\mu_i^z$ as
\begin{equation}
H(J,h)=-J\sum_i\mu_i^x-h\sum_i\mu_i^z\mu_{i+1}^z
\end{equation}
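The claim that the $\mu$'s obey the same (anti)commutation relations as the $\sigma$'s can be checked directly on a small chain. The following is only a numerical illustration of that algebra (0-based site labels, with $\mu^x_i$ defined for $i < N-1$), not of the spectral statement itself:

```python
import numpy as np
from functools import reduce

# Verify the Pauli algebra of the dual operators mu^z_i = prod_{j<=i} sigma^x_j
# and mu^x_i = sigma^z_{i+1} sigma^z_i on a small open chain.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

N = 4  # number of sites

def site_op(op, i):
    """Operator `op` acting on site i of an N-site chain (identity elsewhere)."""
    return reduce(np.kron, [op if j == i else I2 for j in range(N)])

def mu_z(i):
    return reduce(np.matmul, [site_op(sx, j) for j in range(i + 1)])

def mu_x(i):
    return site_op(sz, i + 1) @ site_op(sz, i)

for i in range(N - 1):
    for j in range(N - 1):
        A, B = mu_x(i), mu_z(j)
        if i == j:
            assert np.allclose(A @ B + B @ A, 0)   # anticommute on the same site
        else:
            assert np.allclose(A @ B - B @ A, 0)   # commute on different sites
    assert np.allclose(mu_x(i) @ mu_x(i), np.eye(2**N))
    assert np.allclose(mu_z(i) @ mu_z(i), np.eye(2**N))
print("dual operators satisfy the Pauli algebra on", N, "sites")
```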
At this stage, many textbooks will tell us since $\sigma$'s and $\mu$'s have the same algebra relations, the right hand side of the last equation is nothing but $H(h,J)$. My confusions are
1) Does the fact that the operators have the same algebra really imply that $H(J,h)$ and $H(h,J)$ have the same spectrum? We know that for a given algebra we can have different representations, and these different representations may give different results. For example, the angular momentum algebra is always the same, but we can have different eigenvalues of the spin operators.
2) This is related to the first confusion. Instead of looking at the algebra of the new operators, we can also look at how the states transform under this duality transformation. In the eigenbasis of $\mu_i^x$, if I really consider it as a simple Pauli matrix, the state $|\rightarrow\rangle$ corresponds to two states in the original picture, i.e. $|\uparrow\uparrow\rangle$ and $|\downarrow\downarrow\rangle$. The same for state $|\leftarrow\rangle$. In the $\mu_i^z$ basis, the correspondence is more complicated. A state corresponds to many states in the original picture, and the number of the corresponding states depend on the position of this state. Therefore, this duality transformation is not unitary, which makes me doubt whether $H(J,h)$ and $H(h,J)$ should have the same spectrum. Further, what other implication may this observation lead to? For example, doing one duality transformation is a many-to-one correspondence, then doing it back should still be a many-to-one correspondence, then can we recover the original spectrum?
3) Another observation is that, in the above, $\mu_i^z$ involves a string of operators on the left side; we could equally define it in terms of a string of operators on the right side, so it seems there is an unobservable string. What implications can this observation lead to? Is this unobservable string related to the unobservable strings in the Levin-Wen model? |
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range $|\eta|$ < 0.8 ... |
Underpinnings of Mass Action: The Ideal Gas
The Ideal Gas: The basis for "mass action" and a window into free-energy/work relations
The simplest possible multi-particle system, the ideal gas, is a surprisingly valuable tool for gaining insight into biological systems - from mass-action models to gradient-driven transporters. The word "ideal" really means non-interacting, so in an ideal gas all molecules behave as if no others are present. The gas molecules only feel a force from the walls of their container, which merely redirects their momenta like billiard balls. Not surprisingly, it is possible to do exact calculations fairly simply under such extreme assumptions. What's amazing is how relevant those calculations turn out to be, particularly for understanding the basic mechanisms of biological machines and chemical-reaction systems.
Although ideal particles do not react or bind, their statistical/thermodynamic behavior in the various states (e.g., bound or not, reacted or not) can be used to build powerful models - e.g., for transporters.
Mass-action kinetics are ideal-gas kinetics
The key assumption behind mass-action models is that events (binding, reactions, ...) occur precisely in proportion to the concentration(s) of the participating molecules. This certainly cannot be true for
all concentrations, because all molecules interact with one another at close enough distances - i.e., at high enough concentrations. In reality, beyond a certain concentration, simple crowding effects due to steric/excluded-volume effects mean that each molecule can have only a maximum number of neighbors.
But in the ideal gas - and in mass-action kinetics - no such crowding effects occur. All molecules are treated as point particles. They do not interact with one another, although virtual/effective interactions occur in a mass-action picture. (We can say these interactions are "virtual" because the only effect is to change the number of particles - no true forces or interactions occur.)
Pressure and work in an ideal gas
Ideal gases can perform work directly using pressure. The molecules of an ideal gas exert a pressure on the walls of the container holding them due to collisions, as sketched above. The amount of this pressure depends on the number of molecules colliding with each unit area of the wall per second, as well as the speed of these collisions. These quantities can be calculated based on the mass $m$ of each molecule, the total number of molecules, $N$, the total volume of the container $V$ and the temperature, $T$; together these give the ideal gas law
$$P = \frac{N \, k_B T}{V} . \tag{1}$$
In turn, $T$ determines the average speed via the relation $(3/2) \, k_B T = \avg{(1/2) \, m \, v^2}$. See the book by Zuckerman for more details.
We can calculate the
work done by an ideal gas to change the size of its container by pushing one wall a distance $d$ as shown above. We use the basic rule of physics that work is force ($f$) multiplied by distance and the definition of pressure as force per unit area. If we denote the area of the wall by $A$, we have
$$W = f \, d = \left( P A \right) d = P \, \Delta V , \tag{2}$$
where $\Delta V = A \, d$ is the change in volume.
If $d$ is small enough so that the pressure is nearly constant, we can calculate $P$ using (1) at either the beginning or end of the expansion. More generally, for a volume change of arbitrary size (from $V_i$ to $V_f$) in an ideal gas, we need to integrate:
$$W = \int_{V_i}^{V_f} P \, dV = N \, k_B T \int_{V_i}^{V_f} \frac{dV}{V} = N \, k_B T \, \ln\!\left( \frac{V_f}{V_i} \right) , \tag{3}$$
which assumes the expansion is performed slowly enough so that (1) applies throughout the process.
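As a quick numerical check of the slow-expansion work result (a sketch in arbitrary units, with $N k_B T$ set to 1):

```python
import numpy as np

# Numerically verify that the slow isothermal expansion work equals N k_B T ln(V_f/V_i).
NkT = 1.0
V_i, V_f = 1.0, 3.0

V = np.linspace(V_i, V_f, 100001)
P = NkT / V                      # ideal gas pressure at fixed T
W_numeric = np.trapz(P, V)       # integrate P dV along the expansion
W_closed = NkT * np.log(V_f / V_i)

print(W_numeric, W_closed)       # both ~ 1.0986 in these units
```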
Free energy and work in an ideal gas
The free energy of the ideal gas can be calculated exactly in the limit of large $N$ (see below). We will see that it does, in fact, correlate precisely with the expression for work just derived. The free energy depends on temperature, volume, and the number of molecules; for large $N$, it is given by
$$F(N, V, T) = -N \, k_B T \left[ \ln\!\left( \frac{V}{N \, \lambda^3} \right) + 1 \right] , \tag{4}$$
where $\lambda$ is a constant for fixed temperature. For reference, it is given by $\lambda = h / \sqrt{2 \pi m k_B T}$ with $h$ being Planck's constant and $m$ the mass of an atom. See the book by Zuckerman for full details.
Does the free energy tell us anything about work? If we examine the
free energy change occurring during the same expansion as above, from $V_i$ to $V_f$ at constant $T$, we get
$$\Delta F = F(N, V_f, T) - F(N, V_i, T) = -N \, k_B T \, \ln\!\left( \frac{V_f}{V_i} \right) . \tag{5}$$
Comparing to (3),
this is exactly the negative of the work done! In other words, the free energy of the ideal gas decreases by exactly the amount of work done (when the expansion is performed slowly). More generally, the work can be no greater than the free energy decrease. The ideal gas has allowed us to demonstrate this principle concretely.
The ideal gas free energy from statistical mechanics
The free energy is derived from the "partition function" $Z$, which is simply a sum/integral over Boltzmann factors for all possible configurations/states of a system. Summing over all possibilities is why the free energy encompasses the full thermodynamic behavior of a system. In general,
$$F = -k_B T \, \ln Z , \tag{6}$$
and for $N$ identical molecules in three dimensions
$$Z = \frac{1}{N! \, \lambda(T)^{3N}} \int d\rall \; e^{-U(\rall)/k_B T} , \tag{7}$$
where $\lambda(T) \propto 1/\sqrt{T}$ is the thermal de Broglie wavelength (which is not important for the phenomena of interest here), $\rall$ is the set of $(x,y,z)$ coordinates for all molecules and $U$ is the potential energy function. The factor $1/N!$ accounts for interchangeability of identical molecules, and the integral is over all volume allowed to each molecule. For more information, see the book by Zuckerman, or any statistical mechanics book.
The partition function can be evaluated exactly for the case of the ideal gas because the non-interaction assumption can be formulated as $U(\rall) = 0$ for all configurations - in other words, the locations of the molecules do not change the energy or lead to forces. This makes the Boltzmann factor exactly $1$ for all $\rall$, and so each molecule's integration over the full volume yields a factor of $V$, making the final result
$$Z = \frac{V^N}{N! \, \lambda(T)^{3N}} . \tag{8}$$
Although (8) assumes there are no degrees of freedom internal to the molecule - which might be more reasonable in some cases (ions) than others (flexible molecules) - the expression is sufficient for most of the biophysical explorations undertaken here. |
Help:Editing Math Equations using TeX
This is how you edit math equations using the TeX syntax to make nice looking equations. Please use TeX when writing math. Trying to put equations directly into the text doesn't look very nice and TeX is very easy to learn.
If you already use TeX, then all you need to know is that your normal syntax must be surrounded by tags:
For HTML rendering if possible: <math> syntax </math>
For forced TeX rendering (which produces an image): <math> syntax ~</math>, <math> syntax \,\!</math>, or <math> syntax \,</math>
The above forced renderings are explained in Forced PNG rendering
If you've never used TeX before, here's a crash course:
Fractions
To make a fraction use:
\frac{foo}{bar}
These render as:
Superscript and Subscript
To make superscripts and subscripts, use:
x^2 y_0
These render as:
Greek Letters
To make greek letters, you just need to know their names. Use a capital for the capital letter, a lower-case for the lower-case letter:
\pi \theta \omega \Omega \gamma \Gamma \alpha \beta
These render as like:
Trig Stuff
You can use the following to render trig functions without italics so it looks nicer:
\cos(\theta) \sin(\theta) \tan(\theta)
These render as: , as opposed to:
cos(\theta) sin(\theta) tan(\theta)
Which look like:
In general, if you want to make words not in italics in math, use
\text{foo bar}
Which looks like instead of
Big Brackets
Usually, a normal parenthesis or bracket will do fine:
x^2 (2x + y)
Renders as:
But if you have big stuff like fractions, it doesn't always look as nice:
(\frac{\pi}{2})
renders as:
Instead, use:
x^2 \left(2x + y\right) \left( \frac{\pi}{2} \right)
Which looks like:
Notice that these kinds of brackets are always the right size. They also work with square brackets:
\left[ \frac{\pi}{2} \right] \left[ x^2 (2x + y) \right]
Renders as:
Other
These are some other things that may be useful. In general, if you want to know how to make a type of symbol, you can find many usefuly lists by searching for TeX or LaTeX math symbols.
\sqrt{foo}
\int_a^b f(x)dx
\pm
\mp
\approx
One thing you may notice is that adding spaces between symbols will not add more space in the rendered output.
a_0 a_1 a_0 a_1
These will both render as:
If you want to force spacing between symbols, you have to use the "~" symbol:
a_0~~~~~~~~~a_1
renders as:
Also, it can be useful to note that there are two kinds of math font that the wiki will try to use. One is a smaller font which fits better into a line of normal text, and the other is a larger font which looks nicer when you have an equation on its own line. Whenever you make something "big" like a fraction or square root symbol, the wiki will automatically use the bigger font. Sometimes you may want to force a line to be bigger because it looks nicer. I have found no nice way to do this other than it just so happens that if you put a "~" at the end of a line that line will be rendered in the bigger font; since the "~" is at the end you won't notice that there is technically an extra blank space there.
<math>A\cos(\omega t)</math> <math>A\cos(\omega t)~</math>
These will render as:
Lastly, don't be afraid to nest things together to make really complicated looking expressions:
v_2 = \sqrt{\frac{2\left(P_1 - P_2\right)}{\rho \left(1 - {\left(\frac{A_2}{A_1}\right)}^2\right)}}
will render as: |
Quenching of the E1 strength in 149Nd
Abstract.
Lifetime measurements of excited states in
149Nd have been performed using the advanced time-delayed \(\beta\gamma\gamma(t)\) method. Half-lives of 14 excited states in 149Nd have been determined for the first time or measured with higher precision. Twelve new \(\gamma\)-lines and 5 new levels have been introduced into the decay scheme of 149Pr based on results of the \(\gamma\gamma\) coincidence measurements. Reduced transition probabilities have been determined for 40 \(\gamma\)-transitions in 149Nd. Configuration assignments for 6 rotational bands in 149Nd are proposed. Enhanced E1 transitions indicate that the ground-state band and the band built on the 332.9 keV level constitute a pair of the \(K^{\pi} = 5/2^{\pm}\) parity doublet bands. Potential energy surfaces on the \((\beta_{2},\beta_{3})\)-plane have been calculated for the lowest single quasi-particle configurations in 149Nd using the Strutinski method and the axially deformed Woods-Saxon potential. The predicted occurrence of the octupole-deformed K = 5/2 configuration is in agreement with experiment. Unexpectedly low \(\vert D_0\vert\) values obtained for the \(K^{\pi} = 5/2^{\pm}\) parity doublet bands may result from cancellation between the proton and neutron shell correction contributions to \(\vert D_0\vert\).
Keywords: Rotational Band; Reduced Transition Probability; Octupole Deformation; Potential Energy Surface Calculation; 149Nd Nucleus |
I understand what a geodesic is, but I'm struggling to understand the meaning of the geodesic flow (as defined e.g. by
Do Carmo, Riemannian Geometry, page 63).
I can state my confusion in two different ways:
1)
Do Carmo writes:
Why does a geodesic $\gamma$ uniquely define a vector field
on an open subset? In other words, why are the values of the vector field uniquely defined at those points that are not on the geodesic $\gamma$?
2) In local coordinates, the geodesic flow is defined as the solution to the ordinary differential equation
$$ \tag{1}\frac{d^2 x_k}{dt^2}+\sum_{i,j}\Gamma^k_{ij}\frac{dx_i}{dt}\frac{dx_j}{dt}=0 $$
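For reference, the same equation can be rewritten as a first-order system on $TM$, which is the form in which a single point of $TM$ serves as the initial condition:
$$\frac{dx_k}{dt}=v_k,\qquad \frac{dv_k}{dt}=-\sum_{i,j}\Gamma^k_{ij}\,v_i\,v_j,$$
so one initial condition $(x(0),v(0))=(p,v)\in TM$ determines a unique solution curve.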
For the solution to be unique on $TM$ (or on an open subset), we need some boundary condition. The only boundary condition I can see is a given geodesic $\gamma(t)$.
What are the boundary conditions for this ODE? |
Let us first recall some definitions and useful formulas for a surface in $\mathbb{R}^3$ given by an immersion $F\colon U \rightarrow \mathbb{R}^3$, where $U \subset \mathbb{R}^2$ is an open set.
Denote $F_x = \frac{\partial F}{\partial{x}}$, $F_{x x} = \frac{\partial{F_x}}{\partial{x}}$, and so on.
The components of the first fundamental form are $I_{x x} = F_x \cdot F_x$, $I_{y y}=F_y \cdot F_y$, $I_{x y} = I_{y x} = F_x \cdot F_y$
For the second fundamental form we may use the expression for the unit normal vector$$n = \frac{F_x \times F_y}{|F_x \times F_y|}$$so that $II_{x x} = n \cdot F_{x x}$, $II_{y y} = n \cdot F_{y y}$, $II_{x y} = II_{y x} = n \cdot F_{x y}$
The Gaussian curvature $K(x,y)$ has the following expression$$K(x,y) = \frac{II_{x x} II_{y y} - II_{x y}^2}{I_{x x} I_{y y} - I_{x y}^2} \tag{1}$$and the mean curvature can be computed by$$H(x,y) = \frac{I_{x x} II_{y y} - 2 I_{x y} II_{x y} + I_{y y} II_{x x}}{I_{x x} I_{y y } - I_{x y}^2} \tag{2}$$In the proposed problem we a given a surface represented by a graph of function $z = f(x) + g(y)$, so our immersion has the following form:$$F(x,y) = \begin{pmatrix}x \\y \\f(x) + g(y) \end{pmatrix}$$and we calculate $F_x = \begin{pmatrix} 1 \\ 0 \\ f' \end{pmatrix}$, $F_y = \begin{pmatrix} 0 \\ 1 \\ g' \end{pmatrix}$, $F_{x x} = \begin{pmatrix} 0 \\ 0 \\ f'' \end{pmatrix}$, $F_{y y} = \begin{pmatrix} 0 \\ 0 \\ g'' \end{pmatrix}$, $F_{x y} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$.
This is enough to find the unit normal$$n = \frac{(-f', -g', 1)^T}{\sqrt{1 + (f')^2 + (g')^2}}$$so we find the components of the second fundamental form $II_{x x} = \frac{f''}{\sqrt{1 + (f')^2 + (g')^2}}$, $II_{y y} = \frac{g''}{\sqrt{1 + (f')^2 + (g')^2}}$, $II_{x y} = 0$
The components of the first fundamental form are, of course, $I_{x x} = 1 + (f')^2$, $I_{y y} = 1 + (g')^2$, and $I_{x y} = f'g'$.
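To double-check the algebra for a concrete choice of $f$ and $g$, a short symbolic computation (my own sketch, using sympy with the arbitrary examples $f(x)=x^2$ and $g(y)=\sin y$) reproduces these ingredients:

```python
import sympy as sp

# Symbolic check of the fundamental forms and Gaussian curvature for z = f(x) + g(y).
x, y = sp.symbols('x y', real=True)
f = x**2
g = sp.sin(y)

F = sp.Matrix([x, y, f + g])
Fx, Fy = F.diff(x), F.diff(y)
Fxx, Fyy, Fxy = Fx.diff(x), Fy.diff(y), Fx.diff(y)

E, Ff, G = Fx.dot(Fx), Fx.dot(Fy), Fy.dot(Fy)        # first fundamental form
n = Fx.cross(Fy) / Fx.cross(Fy).norm()               # unit normal
L, M, N = n.dot(Fxx), n.dot(Fxy), n.dot(Fyy)         # second fundamental form

K = sp.simplify((L * N - M**2) / (E * G - Ff**2))
print(K)   # should reduce to f''*g'' / (1 + f'^2 + g'^2)^2 for this f and g
```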
I will let you finish the job by substituting these quantities into equations (1) and (2). |
Greig Cowan Dr G A Cowan Research interests LHCb The LHCb experiment at CERN is searching for new physics through precision measurements of the properties of heavy quarks.
Quarks are the fundamental building blocks of the protons and neutrons which make up atomic nuclei. The study of heavy quarks has a long and illustrious history, and led to many important discoveries and the award of the 2008 Nobel prize in physics to the theorists who first wrote down the mathematics of CP-violation in the SM. The LHCb experiment is continuing this legacy by studying heavy quarks in unprecedented detail due to the huge size of the event samples that it can record, process and analyse at the CERN LHC.
Other experiments at the LHC have yet to find any direct evidence of new physics beyond the SM: they are currently pushing the energy boundaries of their searches into the multi-TeV scale. LHCb can indirectly probe to much higher energies via the presence of non-SM particles in the quantum virtual loops of the heavy-quark decay processes.
My research aims to perform the precise measurements of CP-violating and heavy-quark properties. To do this requires us to have a deep understanding of the reconstruction and selection of many different processes which occur in the LHC, requiring the use of clever algorithms and modern computing technology to help us dig out signals from the large background noise. This technology will be used in the next generation of grid/cloud computing, with all the potential for innovation that it brings. We have to understand the subtle effects which our experimental apparatus can have on the measurements, a task which only becomes more difficult as the size of the data sample grows.
Hyper-Kamiokande The Hyper-Kamiokande (Hyper-K) experiment is the next generation flagship facility for the study of neutrino oscillations, nucleon decays, and astrophysical neutrinos.
Hyper-K is a third generation underground water Cherenkov detector situated in Kamioka, Japan. It consists of a 1 million tonne water target, which is about 20 times larger than that of the existing Super-Kamiokande (Super-K) detector. It will serve as the far detector for a long baseline neutrino oscillation experiment planned for the upgraded J-PARC proton synchrotron beam. With a total exposure of 7.5 MW × 10^7 s to the 2.5 degree off-axis neutrino beam, Hyper-K aims to make a measurement of the CP (charge-parity) violating phase of the neutrino mixing matrix, δCP, and to determine the neutrino mass hierarchy through the study of atmospheric neutrinos. It is expected that the CP phase δCP can be determined to better than 19 degrees for all possible values of δCP and CP violation can be established with a statistical significance of 3(5)σ for 76(58)% of all possible values of δCP. Hyper-K will also serve as a detector capable of observing proton decays, atmospheric neutrinos, and neutrinos from astronomical origins enabling measurements that far exceed the current world best measurements.
We are currently performing R&D studies for the design of the proposed TITUS intermediate detector of Hyper-K. In addition we are characterising new hybrid photo-detectors that are candidates for use in TITUS and Hyper-K.
Teaching assistant/lecturer for the Junior Honours "Numerical Methods" course. Teaching assistant for the Junior Honours "Research Methods" course. Teaching assistant for the Junior Honours "Data acquisition and handling" course. Recent publications Observation of a Narrow Pentaquark State, Pc (4312)+, and of the Two-Peak Structure of the Pc (4450)+ DOI, Physical Review Letters, 122, 22 Observation of $B^0_{(s)} \to J/\psi p \overline{p}$ decays and precision measurements of the $B^0_{(s)}$ masses DOI, Physical Review Letters, 122, p. 191804 Search for $CP$ violation in $D_s^+\to K_S^0 \pi^+$, $D^+\to K_S^0 K^+$ and $D^+\to \phi \pi^+$ decays DOI, Physical Review Letters, 122, p. 191803 Journal of High Energy Physics, 1905 Physical Review Letters, 122, 19, p. 191801 |
1Department of Pure Mathematics, Faculty of Mathematical Sciences, University of Guilan, Iran.
2Department of Pure Mathematics, Faculty of Mathematical Sciences, University of Guilan, Iran.
Received: 29 April 2015; Revised: 30 January 2016; Accepted: 07 April 2016
Abstract
Let $(\Sigma_P,\sigma_P)$ be the space of a spacing shift, where $P\subset \mathbb{N}_0=\mathbb{N}\cup\{0\}$, $\Sigma_P=\{s\in\{0,1\}^{\mathbb{N}_0}: s_i=s_j=1 \mbox{ if } |i-j|\in P \cup\{0\}\}$ and $\sigma_P$ is the shift map. We will show that $\Sigma_P$ is mixing if and only if it has the almost specification property with at least two periodic points. Moreover, we show that if $h(\sigma_P)=0$, then $\Sigma_P$ is almost specified, and if $h(\sigma_P)>0$ and $\Sigma_P$ is almost specified, then it is weak mixing. Also, some sufficient conditions for a coded $\Sigma_P$ being renewal or uniquely decipherable are given. At last we will show that there are only two conjugacies from a transitive $\Sigma_P$ to a subshift of $\{0,1\}^{\mathbb{N}_0}$. |
https://doi.org/10.1351/goldbook.B00746
The term applies to either of the equations: \[\frac{k_{\text{HA}}}{p} = G\left ( \frac{q\ K_{\text{HA}}}{p} \right )^{\alpha}\] \[\frac{k_{\text{A}}}{q} = G\left ( \frac{q\ K_{\text{HA}}}{p} \right )^{-\beta} \] (or their logarithmic forms) where \(\alpha\), \(\beta\) and \(G\) are constants for a given reaction series (\(\alpha\) and \(\beta\) are called 'Brønsted exponents'), \(k_{\text{HA}}\) and \(k_{\text{A}}\) are @C00885@ (or rate coefficients) of reactions whose rates depend on the concentrations of HA and/or of A−. \(K_{\text{HA}}\) is the acid @D01801@ constant of the acid HA, \(p\) is the number of equivalent acidic protons in the acid HA, and \(q\) is the number of equivalent basic sites in its conjugate base A−. The chosen values of \(p\) and \(q\) should always be specified. (The charge designations of H and A are only illustrative.) The Brønsted relation is often termed the 'Brønsted @C00875-1@' (or the '@C00875-2@'). Although justifiable on historical grounds, this name is not recommended, since Brønsted relations are known to apply to many uncatalysed and pseudo-catalysed reactions (such as simple @P04915@). The term 'pseudo-Brønsted relation' is sometimes used for reactions which involve @N04250@ instead of acid–base @C00874@. Various types of Brønsted parameters have been proposed, such as \(\beta_{\text{lg}}\), \(\beta_{\text{nuc}}\), \(\beta_{\text{eq}}\) for @L03493@, nucleophile and equilibrium constants, respectively. See also:
linear free-energy relation |
The problem is that you have not solved the question yet. What you have found is
not the friction between the boxes. It is something else. As you actually state yourself, you have instead found the maximum [static] friction. This is just the maximum possible value and not at all necessarily equal to the actual friction. Static friction can be anything from $0$ to this maximum limit of $80\:\mathrm{N}$ that you found.
In math-terms you have looked at static friction $f_s$ with this expression:
$$f_s\leq n \mu_s$$
There is only an equal sign here, if you are looking for
maximum static friction. If you are just looking for static friction, you cannot use this. Whenever you have a force that you don't have a formula for, use Newton's laws to find it:
Newton's 2nd law on the top block:
$$\sum F_x=m_{top}a\quad\Leftrightarrow\quad F-f_s=m_{top}a\quad\Leftrightarrow\quad f_s=F-m_{top}a$$
Newton's 2nd law on the bottom block:
$$\sum F_x=m_{bottom}a\quad\Leftrightarrow\quad f_s=m_{bottom}a \quad\Leftrightarrow\quad a=\frac{f_s}{m_{bottom}}$$
The accelerations $a$ are equal if this is static friction (i.e. if the blocks move together). These are two equations with only two unknowns; if you put in the numbers and solve them, you get the right result.
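A minimal numeric sketch of that last step in Python (the masses and the applied force below are made-up placeholders, since the original problem's numbers are not quoted in this answer; only the 80 N maximum is):

F = 120.0        # applied force on the top block [N] -- assumed value
m_top = 10.0     # mass of the top block [kg] -- assumed value
m_bottom = 5.0   # mass of the bottom block [kg] -- assumed value

# Combining f_s = F - m_top*a with f_s = m_bottom*a gives a = F/(m_top + m_bottom)
a = F / (m_top + m_bottom)
f_s = m_bottom * a
print(f"a = {a:.2f} m/s^2, required static friction f_s = {f_s:.2f} N")
# If f_s came out larger than the maximum static friction n*mu_s, the blocks slip
# and you would use the kinetic friction formula f_k = n*mu_k instead.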
You have in your question shown that the maximum static friction is $80\:\mathrm{N}$, so if our result here is larger, then we know that we do
not have static but rather kinetic friction between them. Then you could easily have found the result without Newton's laws, because in that case it must be kinetic friction $f_k$ between the boxes (as this is the only other possibility) since they slide over each other. And you do have a clear formula for kinetic friction:
$$f_k=n\mu_k$$
Note that this is another $\mu$ than before (typically $\mu_k$ is smaller than $\mu_s$) |
When you write the five dimensional Kaluza-Klein metric tensor as
$$ g_{mn} = \left( \begin{array}{cc} g_{\mu\nu} & g_{\mu 5} \\ g_{5\nu} & g_{55}\\ \end{array} \right) $$
where $g_{\mu\nu}$ corresponds to the ordinary four dimensional metric and $ g_{\mu 5}$ is the ordinary four dimensional vector potential, $g_{55}$ appears as an additional scalar field. This new scalar field, called a dilaton field, IS physically meaningful, since it defines the size of the 5th additional dimension in Kaluza-Klein theory. Such fields are natural in every theory that has compactified dimensions. Even though such fields have up to now not been experimentally confirmed, it is wrong to call such a field "unphysical".
"Unphysical" are, in some cases, fields introduced to rewrite the transformation determinant in calculations of certain generating functionals, or the additional fields needed to make an action local, which, in contrast to such a dilaton field, may have no well-defined physical meaning. |
I think @ADG has provided a nice summary of when it is and isn't acceptable to post answers involving CAS. CAS is a lovely tool that I certainly use to check my hand-derived results and sometimes to get around tedious algebra that isn't the entire point of a problem.
However, CAS can be downright misleading, if not thoroughly disconcerting, if used mindlessly, even if technically correct. I'll discuss a real example here on M.SE.
The problem concerns a double integration. Really, the trick to analytical evaluation lies in a change in the order of integration. That is where the thinking is. Maybe a CAS can recognize the thought pattern and produce the correct answer. I don't know of one, however. All I know is what happened when someone (a Maple salesperson?) answered the question with Maple I/O.
So, I reproduce the pure CAS answer:
$$-1/12\,{\frac {2\,{\mbox{$_3$F$_2$}(1/6,1/2,1/2;\,7/6,3/2;\,1)}\Gamma \left( 5/6 \right) \Gamma \left( 2/3 \right) -{\pi }^{3/2}}{\Gamma \left( 5/6 \right) \Gamma \left( 2/3 \right) }}$$
To the inexperienced reader trying to learn something, this is enough to discourage. Seriously, if you were struggling in Calc III and were presented with this answer, wouldn't you be tempted to give up?
The sad part is that the answer is quite correct, numerically. But we have generalized hypergeometric and ugly-looking gammas. That integral must be so very hard!
This is why CAS-only solutions are unacceptable in many cases, even if the OP only asked for the result of evaluating the integral. There is a level of thought - at this time, human thought - that the problem deserves, and that someone posting an answer at M.SE needs to describe. The OP needs to be taught to recognize that a change in order of integration can reduce some of these double integrals to simple single integrals.
In this case, as the accepted solution explains, the double integral evaluates to $\pi/24$. That's it. I don't care if the CAS solution agrees with this somehow, either numerically or through a complicated series of identities; the CAS has failed to present the answer in a useful form. It, and any answer like it that favors mindlessness and I/O over understanding and exposition, should be downvoted thoroughly. |
CryptoDB Paper: Pairing-Friendly Elliptic Curves of Prime Order
Authors: Paulo S. L. M. Barreto, Michael Naehrig
URL: http://eprint.iacr.org/2005/133
Abstract: Previously known techniques to construct pairing-friendly curves of prime or near-prime order are restricted to embedding degree $k \leqslant 6$. More general methods produce curves over $\F_p$ where the bit length of $p$ is often twice as large as that of the order $r$ of the subgroup with embedding degree $k$; the best published results achieve $\rho \equiv \log(p)/\log(r) \sim 5/4$. In this paper we make the first step towards surpassing these limitations by describing a method to construct elliptic curves of prime order and embedding degree $k = 12$. The new curves lead to very efficient implementation: non-pairing cryptosystem operations only need $\F_p$ and $\F_{p^2}$ arithmetic, and pairing values can be compressed to one \emph{sixth} of their length in a way compatible with point reduction techniques. We also discuss the role of large CM discriminants $D$ to minimize $\rho$; in particular, for embedding degree $k = 2q$ where $q$ is prime we show that the ability to handle $\log(D)/\log(r) \sim (q-3)/(q-1)$ enables building curves with $\rho \sim q/(q-1)$.
BibTeX @misc{eprint-2005-12469,
title={Pairing-Friendly Elliptic Curves of Prime Order},
booktitle={IACR Eprint archive},
keywords={public-key cryptography / elliptic curves, pairing-based cryptosystems},
url={http://eprint.iacr.org/2005/133},
note={Revised version presented at SAC'2005 and published in LNCS 3897, pp. 319--331, Springer, 2006. [email protected] 13207 received 8 May 2005, last revised 28 Feb 2006},
author={Paulo S. L. M. Barreto and Michael Naehrig},
year=2005
} |
Re: Two semi-circles are drawn on adjacent sides of a square with side len[#permalink]
23 Jul 2019, 08:46
Two semi-circles are drawn on adjacent sides of a square with side length 4 as shown above. What is the area of the shaded region?
I divided the square into 4 small squares.
Square #1 (I quadrant): its area is 2*2=4 and we need to count it. Squares #2 and #4 (II quadrant and III quadrant): if we put them together, we can see a full semicircle, and we need the area outside it: 2*4 - (Pi*2^2)/2 = 8 - 2Pi. Square #3 we don't need to count. Adding these up, 4 + (8 - 2Pi) = 12 - 2Pi is the answer.
Re: Two semi-circles are drawn on adjacent sides of a square with side len[#permalink]
23 Jul 2019, 08:49
Consider only one semi-circle. Its area will be \(\frac{\pi}{2} *2^2\) and the area of the triangle will be \(\frac{1}{2} *2\sqrt{2}*2\sqrt{2}\); thus the area of those two leaves will be \(\frac{\pi}{2} *2^2\) - \(\frac{1}{2} *2\sqrt{2}*2\sqrt{2}\) = \(2\pi - 4\)
Area of the two semicircles without the overlap = \(4\pi - (2\pi - 4)\) = \(2\pi +4\) ----- eq 1
Total area of the square = \(4^2 = 16\); subtract eq 1 from the square's area to get the shaded part: \(16 - (2\pi +4)\) = \(12 - 2\pi\)
Re: Two semi-circles are drawn on adjacent sides of a square with side len[#permalink]
23 Jul 2019, 08:50
I wasn't sure how to solve for the section between the two circles which I knew needed to be added back in, so I approximated. The area of the square would be 16. The shaded section is approximately only slightly less than half the area, which would be 8. π ~= 3
A. 12−π 12 - 3 = 9 - This is more than half
B. 12−2π 12 - 2*3 = 6 - This seems to be the closest answer
C. 12+π 12 + 3 = 16 - This is basically the size of the square so cannot be the answer
D. 12+2π 12+2*3 = 18 - This is larger than the area of the square so cannot be the answer
Re: Two semi-circles are drawn on adjacent sides of a square with side len[#permalink]
23 Jul 2019, 08:55
Two semi-circles are drawn on adjacent sides of a square with side length 4 as shown above. What is the area of the shaded region?
To answer the question you need to deduct the area of the two semi-circles from the area of the square. The area of the whole square is 4*4 = 16. The area of each semi-circle is (Pi/2)*R^2, where R is equal to two. This is going to be multiplied by 2 as there are 2 semi-circles, but then we are counting the part common to both semi-circles twice, hence we need to deduct that.
Re: Two semi-circles are drawn on adjacent sides of a square with side len[#permalink]
23 Jul 2019, 09:00
Area of the square - 4*4=16 Area of the 2 semi circles = (2*π*2*2)/2 = 4π Area of the common segment between 2 semi circles = (Area of 4 semi circles - Area of the square)/4 <when 4 semicircles are made within the square 4 common segments similar to the one in the diagram are created and leave zero uncovered area> = ((4*π*2*2)/2 - 16)/4 = (8π - 16)/4 = 2π-4
Area of shaded region = Area of square - (area of 2 semicircles - area of the common segment) = 16 - (4π - (2π-4)) = 16 - 4π + 2π - 4 = 12 - 2π
Two semi-circles are drawn on adjacent sides of a square with side length 4 as shown above. What is the area of the shaded region?
This question is a bit tricky. We have a square with side length 4 and two semi-circles in it. We need to find the area of the space not occupied by the semicircles. Let us first of all divide our square into 4 equal pieces and number them 1 to 4 from the upper-left one to the lower-right one. Let us look at these small squares one by one. First of all, the area of such a small square is \(S=(a/2)^2=(4/2)^2=4\). Also, we might notice that the diameter of each semicircle is equal to the side of the big square. Thus, \(r=d/2=a/2=2\). The area of a semicircle is \(S=(pi*r^2)/2=(pi*2^2)/2=2*pi\)
Now, let us have a closer look at these squares: 1) Upper left square. We can see that there is a half of semicircle in this small square. The area of the half of semicircle is \(S=2*pi/2=pi\) So the area of the shaded region will be \(area of small square - area of half of semicircle = 4 - pi\)
2) Upper right square This small square does not have its area covered by a semicircle, thus its all area is shaded and is equal to 4
3) Lower left square This square is covered by 2 halves of semicircles and we can notice that it does not have any area shaded. Thus, its shaded area is 0.
4) Lower right square Here, as in the first example, the shaded area is equal to \(area of small square - area of half of semicircle = 4 - pi\)
Adding all these areas together we get: \((4 - pi) + 4 + 0 + (4 - pi) = 12 - 2*pi\)
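If you want to sanity-check 12 - 2*pi numerically, here is a small Monte Carlo sketch in Python. It assumes the two semicircles sit on the left and bottom sides of the square (the figure is not reproduced here, but any two adjacent sides give the same area):

import random

random.seed(0)
N = 200_000
outside_both = 0
for _ in range(N):
    x, y = random.uniform(0, 4), random.uniform(0, 4)
    in_left = x**2 + (y - 2)**2 <= 4        # semicircle of radius 2 on the left side
    in_bottom = (x - 2)**2 + y**2 <= 4      # semicircle of radius 2 on the bottom side
    if not (in_left or in_bottom):
        outside_both += 1

print(16 * outside_both / N)      # estimate of the shaded area, about 5.72
print(12 - 2 * 3.14159265)        # exact value 12 - 2*pi = 5.7168...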
Re: Two semi-circles are drawn on adjacent sides of a square with side len[#permalink]
23 Jul 2019, 09:13
Two semi-circles are drawn on adjacent sides of a square with side length 4 as shown above. What is the area of the shaded region?
Solution: If we draw 4 semi-circles, one on each of the sides, then all the semi-circles will meet/intersect in the center (the crossing point of the square's diagonals). So (the area of the square) = 4 * (each semi-circle's area) - 4 * (area of each eye-shaped region) => 4^2 = (4 * π*((4/2)^2)/2) - 4 * (area of each eye-shaped region) => 4 = (π*(2^2)/2) - (area of each eye-shaped region) => (area of each eye-shaped region) = 2π - 4
So finally, the area of the shaded region = (the area of the square) - 2 * (each semi-circle's area) + (area of each eye-shaped region) = 4^2 - 2 * π*((4/2)^2)/2 + (area of each eye-shaped region); the area of the shaded region = 16 - 4π + (2π - 4) = 12 - 2π
A. 12−π B. 12−2π --> correct C. 12+π D. 12+2π E. 24−4π
Re: Two semi-circles are drawn on adjacent sides of a square with side len[#permalink]
23 Jul 2019, 09:21
IMO answer is B:
Instead of doing long calculations, I tried to approximate the area. As both circles are semicircles, they will intersect at the center of the square, so the top right part is slightly greater than 1/4 of the area of the square, plus some more coming from the leftover area after the semicircles intersect. 1/4 of the area of the square is 4 units + approx. 1 unit from the left semicircle + 1 unit from the bottom semicircle = ~6 units.
None of the answer choices have this value except B, approximating pi to 3.14. |
https://doi.org/10.1351/goldbook.H02732
The equation in the form: \[\log _{10}(\frac{k}{k_{0}}) = \rho \ \sigma \] or \[\log _{10}(\frac{K}{K_{0}}) = \rho \ \sigma \] applied to the influence of meta- or para-substituents X on the reactivity of the @F02555@ Y in the benzene derivative m- or p-XC\(_6\)H\(_4\)Y. \(k\) or \(K\) is the rate or @E02177@, respectively, for the given reaction of m- or p-XC\(_6\)H\(_4\)Y; \(k_{0}\) or \(K_{0}\) refers to the reaction of C\(_6\)H\(_5\)Y, i.e. X = H; \(\sigma\) is the substituent constant characteristic of m- or p-X; \(\rho\) is the reaction constant characteristic of the given reaction of Y. The equation is often encountered in a form with \(\log _{10}k_{0}\) or \(\log _{10}K_{0}\) written as a separate term on the right hand side, e.g. \[\log _{10}k = \rho \ \sigma +\log _{10}k_{0}\] or \[\log _{10}K = \rho \ \sigma +\log _{10}K_{0}\] It then signifies the intercept corresponding to X = H in a regression of \(\log _{10}k\) or \(\log _{10}K\) on \(\sigma \). See also:
ρ-value
σ-constant,
Taft equation,
Yukawa–Tsuno equation |
I have multiple regression with, say 3 independent variables: $Y=B_0+B_1x_1+B_2x_2+B_3x_3$ I would like to test if $B_2+3B_3$ is significantly different from zero, i.e. $$H_0: B_2+3B_3=0$$ $$H_1: B_2+3B_3\neq 0$$ Can you please help to find appropriate way to test for significance of linear functions of two coefficients as in above example. Many thanks in advance.
If your errors are normal and regressors are non-random, the OLS estimates of the coefficients are normal:
$$\hat\beta-\beta\sim N(0,\sigma^2(X'X)^{-1})$$
Hence any linear combination is normal too:
$$R(\hat\beta-\beta)\sim N(0,\ \sigma^2 R(X'X)^{-1}R')$$
You want to test that $R\beta=r$, with $R$ being $[0,0,1,3]$ and $r=0$. The Wald statistic for testing the null hypothesis $R\beta=r$ is
$$(R\hat\beta-r)'(R\sigma^2(X'X)^{-1}R')^{-1}(R\hat\beta-r)\sim \chi^2_q,$$
where $q$ is the rank of $R$, which in your case is simply 1. You have unknown $\sigma^2$, simply plug in the consistent estimate and you are good to go.
This statistic is implemented in practically all the statistical packages which estimate linear regression. In R you can use the function linearHypothesis from the package car.
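For completeness, here is a sketch of the same Wald test computed by hand in Python on simulated data (the data-generating values below are arbitrary, chosen only to make the example runnable):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])   # intercept, x1, x2, x3
beta_true = np.array([1.0, 0.5, -2.0, 1.0])                  # arbitrary example coefficients
y = X @ beta_true + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)                 # OLS estimates
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - X.shape[1])                # consistent estimate of sigma^2
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)

R = np.array([[0.0, 0.0, 1.0, 3.0]])                         # tests B2 + 3*B3 = 0
r = np.array([0.0])
diff = R @ beta_hat - r
W = float(diff @ np.linalg.solve(R @ cov_beta @ R.T, diff))  # Wald statistic, chi^2 with 1 df
p_value = stats.chi2.sf(W, df=R.shape[0])
print(W, p_value)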
Your intuition is correct that you cannot just simply add together the estimates of the two parameters. Luckily as @caburke suggests in his comment this is a very standard application of regression and there is a way to do this. The key words to search for are linear combination of estimates from linear regression or (mysteriously) "contrasts".
Given your assumptions, your linear combination of estimates will itself have a t distribution, with standard error equal to
$ s\sqrt{b^t(X^tX)^{-1}b}$
Where b is the vector indicating your linear combination of coefficients you are interested in (in your case, [0,0,1,3]); X is your original matrix of explanatory data (including a column of 1s for the intercept) and $s^2$ is the estimated residual variance.
Most stats software will have a way of doing all of this linear algebra for you.
There are doubtless packages in R (eg the 'contrast' package) that have this conveniently wrapped up if you don't want to do it by hand. A nice little basic function that does it in R is available here: https://notendur.hi.is/thor/TLH2010/Fyrirlestrar/Kafli4/lincomRv8.R. Sorry, I can't identify the author of it, but for the record (in case the link goes down) here is the code:
# A function to estimate a linear combination of parameters from a linear model along
# with the standard error of such a combination.
# lm.result (or model.result) is the result from lm or glm.
# contrast.est is the estimate.
# contrast.se is the standard error.
lincom <- function(model.result, contrast.vector, alpha = 0.05) {
  beta.coef <- coef(model.result)[1:length(contrast.vector)]
  dispersion.param <- summary.lm(model.result)$sigma
  beta.cov <- dispersion.param^2 * summary(model.result)$cov.unscaled[1:length(contrast.vector), 1:length(contrast.vector)]
  df.error <- summary(model.result)$df[2]
  contrast.est <- c(t(contrast.vector) %*% beta.coef)
  contrast.se <- sqrt(c(t(contrast.vector) %*% beta.cov %*% contrast.vector))
  tvalue <- contrast.est/contrast.se
  lowerb <- contrast.est - qt(1-alpha/2, df.error) * contrast.se
  upperb <- contrast.est + qt(1-alpha/2, df.error) * contrast.se
  pvalue <- 2*(1-pt(abs(tvalue), df.error))
  return(list(contrast.est=contrast.est, contrast.se=contrast.se, lower95CI=lowerb, upper95CI=upperb, tvalue=tvalue, pvalue=pvalue))
} |
Matrix Mechanics
In this lesson, we'll cover some of the fundamental principles and postulates of quantum mechanics. These principles are the foundation of quantum mechanics.
The eigenvalues are the values that you measure in an experiment: for example, the position or momentum of a particle. Because the eigenvalues are what you measure, it wouldn't make physical sense if the eigenvalue of an observable had an imaginary part. In this lesson, we'll prove that the eigenvalue of any observable is a real number.
The three operators—\(\hat{σ}_x\), \(\hat{σ}_y\), and \(\hat{σ}_z\)—are associated with the measurements of the \(x\), \(y\), and \(z\) components of spin of a quantum particle, respectively. In this lesson, we'll represent each of these three operators as matrices and solve for the entries in each matrix. These three matrices are called the Pauli matrices.
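As a quick numerical illustration of what that lesson arrives at (this is just a check with the standard Pauli matrices, not part of the lesson text):

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

for name, s in [("sigma_x", sx), ("sigma_y", sy), ("sigma_z", sz)]:
    assert np.allclose(s, s.conj().T)      # each operator is Hermitian, so its eigenvalues are real
    print(name, np.linalg.eigvalsh(s))     # eigenvalues -1 and +1 (spin components in units of hbar/2)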
In this lesson, we'll derive an equation which will allow us to calculate the wavefunction (which is to say, the collection of probability amplitudes) associated with any ket vector \(|\psi⟩\). Knowing the wavefunction is very important since we use probability amplitudes to calculate the probability of measuring eigenvalues (i.e. the position or momentum of a quantum system).
In this lesson, we'll mathematically prove that for any Hermitian operator (and, hence, any observable), one can always find a complete basis of orthonormal eigenvectors.
Schrodinger's Equation
The wavefunction \(\psi(L,t)\) of a particle is confined to a circle when it is nonzero only at points along that circle. When the wavefunction associated with a particle has non-zero values only on points along a circle of radius \(r\), the eigenvalues \(p\) (of the momentum operator \(\hat{P}\)) are quantized—they come in discrete multiples of \(\frac{ℏ}{r}\), namely \(p=n\frac{ℏ}{r}\) where \(n=1,2,…\) Since the eigenvalues for angular momentum are \(L=pr=nℏ\), it follows that angular momentum is also quantized.
Newton's second law describes how the classical state {\(\vec{p_i}, \vec{R_i}\)} of a classical system changes with time based on the initial position and configuration \(\vec{R_i}\), and also the initial momentum \(\vec{p_i}\). We'll see that Schrodinger's equation is the quantum analogue of Newton's second law and describes the time-evolution of a quantum state \(|\psi(t)⟩\) based on the following two initial conditions: the energy and initial state of the system.
In this section, we'll begin by seeing how Schrodinger's time-independent equation can be used to determine the wave function of a free particle. After that, we'll use Schrodinger's time-independent equation to solve for the allowed, quantized wave functions and allowed, energy eigenvalues of a "particle in a box"; this will be useful later on as a qualitative understanding of the quantized wave functions and energy eigenvalues of atoms.
In general, if a quantum system starts out in any arbitrary state, it will evolve with time according to Schrödinger's equation such that the probability \(P(L)\) changes with time. In this lesson, we'll prove that if a quantum system starts out in an energy eigenstate, then the probability \(P(L)\) of measuring any physical quantity will not change with time. |
Faculty of Mathematics and Computer Science, Damghan University, Damghan, Iran.
Receive Date: 19 November 2014,Revise Date: 24 July 2015,Accept Date: 30 July 2015
Abstract
Let $A$ be a $C^{*}$-algebra and $T: A\rightarrow A$ be a linear map which satisfies the functional equation $T(x)T(y)=T^{2}(xy),\;\;T(x^{*})=T(x)^{*}$. We prove that under each of the following conditions, $T$ must be the trivial map $T(x)=\lambda x$ for some $\lambda \in \mathbb{R}$: i) $A$ is a simple $C^{*}$-algebra. ii) $A$ is unital with trivial center and has a faithful trace such that each zero-trace element lies in the closure of the span of commutator elements. iii) $A=B(H)$ where $H$ is a separable Hilbert space. For a given field $F$, we consider a similar functional equation $T(x)T(y) =T^{2}(xy), \; T(x^{tr})=T(x)^{tr},$ where $T$ is a linear map on $M_{n}(F)$ and "tr" is the transpose operator. We prove that this functional equation has only the trivial solution for all $n\in \mathbb{N}$ if and only if $F$ is a formally real field. |
I have trouble understanding the Lorentz transformation to proof the
dilation of time.
Let's use finite differences instead and, further, the entire expression for $\Delta t'$ from the Lorentz transformation
$$\Delta t' = \gamma \left(\Delta t - \frac{v}{c^2}\Delta x \right) = \frac{\Delta t - \frac{v}{c^2}\Delta x}{\sqrt{1 - \frac{v^2}{c^2}}}$$
where $v$ is the
relative speed of the primed and unprimed systems.
Now, assume $\Delta x = 0$ which means that the two events are
co-located in the unprimed system. So, for example, this would be the case for a clock at rest in the unprimed system. It follows that $\Delta t$ is, in this case, the elapsed time according to a clock at rest in the unprimed system.
Now, this clock at rest in the unprimed system has speed $v$ in the primed system; thus, $\Delta t$ is the elapsed time according to a clock moving with speed $v$ in the primed system.
Then, according to the equation above
$$\Delta t' = \frac{\Delta t - \frac{v}{c^2}\cdot 0}{\sqrt{1 - \frac{v^2}{c^2}}} = \frac{\Delta t}{\sqrt{1 - \frac{v^2}{c^2}}}$$
$\Delta t$ is
smaller than the elapsed time according to clocks at rest in the primed system.
Once again, $\Delta t$ is the elapsed time according to a clock moving with speed $v$ in the primed system and, according to clocks at rest in the primed system, this elapsed time is
less than the elapsed time in the primed system.
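A quick numerical illustration in Python (the speed is an assumed example value, not one from the question):

v_over_c = 0.6                           # assumed relative speed of 0.6c
gamma = 1 / (1 - v_over_c**2) ** 0.5
dt = 1.0                                 # 1 s elapsed on the moving clock
dt_prime = gamma * dt
print(gamma, dt_prime)                   # gamma = 1.25, so 1.25 s elapse on clocks at rest in the primed system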
In other words,
moving clocks run slower than clocks at rest. This is time dilation (due to uniform relative motion). |
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused in classical physics's angular momentum. For a orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. the old coordinate), angular momentum should be still conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time )
in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the 1st term varies with time. (where R is the shift of coordinate, since R is constant, and p sort of rotating.)
would anyone kind enough to shed some light on this for me?
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet
?Is it possible to make a time machine ever? Please give an easy answer,a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neigborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing it's state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
Why are axial bonds longer than equatorial bonds in the case of $\mathrm{sp^3d}$ hybridization? I have done some research but I can't seem to find the answer.
You are asking for $\mathrm{sp^3d}$ hybridisation, but I do not know of a case where $\mathrm{sp^3d}$ hybridisation actually happens. Either it does not make sense to discuss hybridisation at all (the iron in pentacarbonyliron) or the hybridisation is actually not $\mathrm{sp^3d}$ but $\mathrm{sp^2 + p}$ (the phosphorus in $\ce{PCl5}$). I shall discuss the latter, because I believe it could be the one you are on about.
Phosphorus pentachloride is often drawn like in figure 1 below for simplicity. This depiction implies five identical covalent 2-electron-2-centre bonds. However, that does not agree with the octet rule.
Figure 1: Simplified structure of $\ce{PCl5}$.
Instead, one can draw a set of two mesomeric structures, each one conforming with the octet rule — see figure 2.
Figure 2: Mesomeric structures of $\ce{PCl5}$ conforming to the octet rule.
This already hints us towards the answer: Rather than assuming two bonds which are equal to the other three bonds we need to consider three ‘classical’ 2-electron-2-centre bonds to the equatorial chlorines and one 4-electron-3-centre bond to the two axial chlorines. This bond’s order is $0.5$ rather than $1$. Typically, the lower the bond order the weaker a bond and therefore the greater the bond length is — the experiment is in fine agreement with theory.
But no bonding discussion is truly complete without an orbital consideration. Check out the three collinear p-orbitals of phosphorus and the two chlorines, that together form three molecular orbitals labelled $\Psi_1$ to $\Psi_3$ in figure 3.
Figure 3: Representation of the three molecular orbitals that form the 4-electron-3-centre bond.
As you can see, the lowest MO $\Psi_1$ is bonding with respect to both $\ce{P-Cl}$ bonds. The highest MO $\Psi_3$ is antibonding with respect to both. And the middle one is bonding with respect to $\ce{Cl-Cl}$ but nonbonding if we add the phosphorus atom. We need to fill in four electrons into these three orbitals ($\ce{PCl3}$ has one lone pair, and we are effectively using the bond of a $\ce{Cl-Cl}$ molecule as our second electron pair). Thus, the bonding and the nonbonding orbitals are filled. Bond order can now be calculated by:
$$\text{bond order} = \frac{(\text{electrons in bonding orbitals}) - (\text{electrons in antibonding orbitals})}{\text{number of bonds}\times 2}\\= \frac{2-0}{2 \times 2} = 0.5$$
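As a trivial sanity check of that arithmetic (the helper function below is just an illustration, not standard chemistry software):

def bond_order(n_bonding, n_antibonding, n_bonds):
    # (electrons in bonding orbitals - electrons in antibonding orbitals) / (number of bonds * 2)
    return (n_bonding - n_antibonding) / (n_bonds * 2)

# 4-electron-3-centre bond in PCl5: 2 bonding electrons, 0 antibonding, shared over 2 P-Cl bonds
print(bond_order(2, 0, 2))   # 0.5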
Again, a lower bond order typically correlates with a greater bond length.
I think it's because the equatorial bonds lie on the same plane and so would be of equivalent length but the axial bonds experience repulsion from the equatorial bonds and as a result try to move away as far as possible so as to minimize repulsion. Thus, the axial bonds would be longer. Hope this helps :) |
Direct answer to the question: yes, there are esoteric and highly impractical PLs based on $\mu$-recursive functions (think Whitespace), but no practical programming language is based on $\mu$-recursive functions due to valid reasons.
General recursive (i.e., $\mu$-recursive) functions are significantly less
expressive than lambda calculi. Thus, they make a poor foundation for programming languages. You are also not correct that the TM is the basis of imperative PLs: in reality, good imperative programming languages are much closer to $\lambda$-calculus than they are to Turing machines.
In terms of computability, $\mu$-recursive functions, Turing machine, and the untyped $\lambda$-calculus are all equivalent. However, the untyped LC has good properties that none of the other two have. It is very simple (only 3 syntactic forms and 2 computational rules), is highly compositional, and can express programming constructs relatively easily. Moreover, equipped with a simple type system (e.g., System $F\omega$ extended with $\mathsf{fix}$), the $\lambda$-calculus can be extremely expressive in that it can express many complex programming constructs easily, correctly and compositionally. You can also extend the $\lambda$-calculus easily to include constructs that are not lambdas. None of the other computational models mentioned above give you those nice properties.
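To get a feel for how much can be encoded with just lambdas, here is the classic Church-numeral exercise written with Python lambdas (a standard illustration, not something specific to this answer):

# A Church numeral n is the higher-order function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
times = lambda m: lambda n: lambda f: m(n(f))

to_int = lambda n: n(lambda k: k + 1)(0)    # decode a Church numeral back to a Python int

one, two, three = succ(zero), succ(succ(zero)), succ(succ(succ(zero)))
print(to_int(plus(two)(three)))    # 5
print(to_int(times(two)(three)))   # 6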
The Turing machine is neither compositional nor universal (you need to have a TM for each problem). There are no concepts of "functions", "variables" or "composition". It is also not exactly true that TMs are the basis of imperative PLs - FWIW, imperative PLs are much, much closer to lambda calculi with control operators than to Turing machines. See Peter J. Landin's "A Correspondence Between ALGOL 60 and Church's Lambda-Notation" for a detailed explanation. If you have programmed in Brainf**k (which actually implements a rather simple Turing machine), you will know that Turing machines are not a good idea for programming.
$\mu$-recursive functions are similar to TMs in this respect. They are compositional, but not nearly as compositional as the LC. You also just can't encode useful programming constructs in $\mu$-recursive functions. Moreover, the $\mu$-recursive functions only compute over $\mathbb{N}$, and to compute over anything else you'd need to encode your data into natural numbers using some sort of Gödel numbering, which is painful.
So, it is not a coincidence that most programming languages are somehow based off the $\lambda$-calculus! The $\lambda$-calculus has good properties: expressiveness, compositionality and extensibility, that other systems lack. However, Turing machines are good for studying computational complexity, and $\mu$-recursive functions are good for studying the logical notion of computability. They both have outstanding properties that the $\lambda$-calculus lacks, but in the field of programming $\lambda$-calculus clearly wins.
In fact, there are many, many more Turing complete systems out there, but they lack any outstanding property whatsoever. Conway's Game of Life, LaTeX macros, and even (some claim) DNA are all Turing complete, but no one programs (i.e. does serious programming) with Conway or studies computational complexity using LaTeX macros. They simply lack good properties. Turing complete
per se is nearly meaningless when it comes to programming.
Also, many non-Turing complete computational systems are very useful when it comes to programming. Regular expressions and yacc are not Turing complete, but they are extremely powerful in solving a certain class of problems. Coq is also not Turing complete, but it is incredibly powerful (it's actually considered much more
expressive than its Turing complete cousin, OCaml). When it comes to programming, Turing completeness is not the key, as many (close to) useless systems are uninterestingly Turing complete. You're not going to claim that Brainf**k or Whitespace are more powerful programming languages than Coq, are you? An expressive foundation is the key to powerful programming languages, and that's why modern programming languages are almost always based on the $\lambda$-calculus. |
Though less used than Nuclear Magnetic Resonance (NMR), Electron Paramagnetic Resonance (EPR) is a remarkably useful form of spectroscopy used to study molecules or atoms with an unpaired electron. It is less widely used than NMR because stable molecules often do not have unpaired electrons. However, EPR can be used analytically to observe labeled species in situ either biologically or in a chemical reaction.
Introduction
Electron Paramagnetic Resonance (EPR) is also known as Electron Spin Resonance (ESR). The sample is held in a very strong magnetic field, while electromagnetic (EM) radiation is applied monochromatically (Figure 1).
Figure 1 (the label (3) marks the monochromatic electromagnetic beam)
This portion of EPR is analogous to simple spectroscopy, where the absorbance by the sample of a single wavelength or range of wavelengths of EM radiation is monitored by the end user, i.e. absorbance. The unpaired electrons can occupy either the m_s = +1/2 or the m_s = -1/2 value (Figure 2). From here either the magnetic field B_0 is varied or the incident light is varied. Today most researchers adjust the EM radiation in the microwave region; the idea is to find the exact point where the electrons can jump from the less energetic m_s = -1/2 to m_s = +1/2. More electrons occupy the lower m_s value (see Boltzmann Distribution).
Figure 2: Resonance of a free electron.
Overall, there is an absorption of energy. This absorbance value, when paired with the associated wavelength, can be used in the following equation to generate a graph showing how absorption relates to frequency or magnetic field.
\[ \Delta E=h\nu=g_e \beta_B B_0 \]
where g_e equals 2.0023193 for a free electron, \( \beta_B\) is the Bohr magneton, equal to 9.2740 x 10^-24 J T^-1, and B_0 indicates the external magnetic field.
Theory
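Plugging numbers into the resonance condition \( \Delta E=h\nu=g_e \beta_B B_0 \) shows why microwave frequencies appear; the field value below is an assumed example (a typical X-band field), not a quantity from this article:

g_e = 2.0023193            # free-electron g-factor
beta_B = 9.2740e-24        # Bohr magneton [J/T]
h = 6.626e-34              # Planck constant [J s]
B0 = 0.35                  # assumed external field [T]

nu = g_e * beta_B * B0 / h # resonance condition h*nu = g_e * beta_B * B0
print(nu / 1e9, "GHz")     # roughly 9.8 GHz, i.e. microwave radiation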
Like NMR, EPR can be used to observe the geometry of a molecule through its magnetic moment and the difference in electron and nucleus mass. EPR has mainly been used for the detection and study of free radical species, either in testing or analytical experimentation. "Spin labeling" species of chemicals can be a powerful technique for both quantification and investigation of otherwise invisible factors.
In the EPR spectrum of a free electron, there will be only one line (one peak) observed. But for the EPR spectrum of hydrogen, there will be two lines (2 peaks) observed, due to the fact that there is an interaction between the nucleus and the unpaired electron. This is also called hyperfine splitting. The distance between the two lines (two peaks) is called the hyperfine splitting constant (A).
By using (2NI+1), we can calculate the number of hyperfine lines in the multiplet of an EPR transition, where N indicates the number of equivalent nuclei and I indicates their nuclear spin. For example, for nitroxide radicals the nuclear spin of 14N is 1, so N=1 and I=1, and we have 2 x 1 x 1 + 1 = 3, which means that a spin-1 nucleus splits the EPR transition into a triplet.
To absorb microwaves, there must be unpaired electrons in the system. No EPR signal will be observed if the system contains only paired electrons, since there will be no resonant absorption of microwave energy. Molecules such as NO, NO_2 and O_2 do have unpaired electrons in their ground states. EPR can also be performed on proteins with paramagnetic ions such as Mn^2+, Fe^3+ and Cu^2+, and on molecules containing stable nitroxide radicals such as 2,2,6,6-tetramethyl-1-piperidinyloxyl (TEMPO, Figure 3) and the di-tert-butyl nitroxide radical.
Figure 3: The nitroxide radical TEMPO
Examples of EPR spectra:
Figure 4: Simulated EPR spectrum of the CH3 radical
Figure 5: Simulated EPR spectrum of the methoxymethyl (H2C(OCH3)) radical |
Increasing the amount of installed renewable energy sources such as solar and wind is an essential step towards the decarbonization of the energy sector.
From a technical point of view, however, the stochastic nature of distributed energy resources (DER) causes operational challenges. Among them, imbalance between production and consumption, overvoltage and overload of grid components are the most common ones.
As DER penetration increases, it is becoming clear that incentive strategies such as Net Energy Metering (NEM) are threatening utilities, since NEM doesn't reward prosumers for synchronizing their energy production and demand.
In order to reduce congestion, distribution system operators (DSOs) currently use a simple indirect method, consisting of a bi-level energy tariff, i.e. the price of buying energy from the grid is higher than the price of selling energy to the grid. This encourages individual prosumers to increase their self-consumption. However, it is inefficient in regulating the aggregated power profile of all prosumers.
Utilities and governments think that a better grid management can be achieved by making the distribution grid ‘smarter’, and they are currently deploying massive amount of investments to enforce this vision.
As I explained in my previous post on the need of decentralized architectures for new energy markets, the common view of the scientific community is that a smarter grid requires an increase in the amount of communication between generators and consumers, adopting near real-time markets and dynamic prices, which can steer users’ consumption during periods in which DER energy production is higher, or increase their production during high demand. For example, in California a modification of NEM that allows prosumers to export energy from their batteries during evening peak of demand has been recently proposed.
But as flexibility will be offered at different levels and will provide a number of services, from voltage control for the DSOs to control energy for the transmission system operators (TSOs), it is important to make sure that these services will not interfere with each other. So far, a comprehensive approach towards the actuation of flexibility as a system-wide leitmotiv, taking into account the effect of DR at all grid levels, is lacking.
In order to optimally exploit prosumers’ flexibility, new communication protocols are needed, which coupled with a sensing infrastructure (smart meters), can be used to safely steer aggregated demand in the distribution grid, up to the transmission grid.
The problem of coordinating dispatchable generators is well known by system operators and has been studied extensively in the literature. When not taking into account grid constraints, this is known under the name of
economic dispatch, and consists of minimizing the generation cost of a group of power plants. When operational constraints are considered, the problem increases in complexity, due to the power flow equations governing currents and voltages in the electric grid. Nevertheless, several approaches are known for solving this problem, a.k.a. optimal power flow (OPF), using approximations and convex formulations of the underlying physics. OPF is usually solved in a centralized way by an independent system operator (ISO). Anyway, when the number of generators increases, as in the case of DERs, the overall problem increases in complexity but can still be solved effectively by decomposing it among generators.
The decomposition has two other main advantages over a centralized solution, apart from allowing faster computation. The first is that generators do not have to disclose all their private information in order for the problem to be solved correctly, allowing competition among the different generators. The second is that the computation has no single point of failure.
In this direction, we have recently proposed a multilevel hierarchical control which can be used to coordinate large groups of prosumers located at different voltage levels of the distribution grid, taking into account grid constraints. The difference between power generators and prosumers is that the latter do not control the time of generated power, but can operate deferrable loads such as heat pumps, electric vehicles, boilers and batteries.
The idea is that prosumers in the distribution grid can be coordinated only by means of a price signal sent by their parent node in the hierarchical structure, an aggregator. This allows the algorithm to be solved using a
forward-backward communication protocol. In the forward pass each aggregator receives a reference price from its parent node and sends it downwards, along with its own reference price, to its child nodes (prosumers or aggregators), located in a lower hierarchy level. This mechanism is propagated along all the nodes, until the terminal nodes (or leaves). Prosumers in leaf nodes solve their optimization problems as soon as they are reached by the overall price signal. In the backward pass, prosumers send their solutions to their parents, which collect them and send the aggregated solution upward.
Apart from this intuitive coordination protocol, the proposed algorithm has other favorable properties. One of them is that prosumers only need to share information on their energy production and consumption with one aggregator, while keeping all other parameters and information private. This is possible thanks to the decomposition of the control problem. The second property is that the algorithm exploits parallel computation of the prosumer specific problems, ensuring minimum overhead communication.
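As a toy illustration of that forward-backward idea, here is a deliberately simplified Python sketch; it is not the actual Hive Power algorithm, and all classes, parameters and numbers are invented for the example:

class Prosumer:
    def __init__(self, p_min, p_max, preferred, comfort_weight):
        self.p_min, self.p_max = p_min, p_max
        self.preferred = preferred          # power it would draw with no price signal [kW]
        self.w = comfort_weight             # penalty weight for deviating from the preferred power

    def respond(self, price):
        # local problem: minimize price*p + w*(p - preferred)^2 over [p_min, p_max]
        p = self.preferred - price / (2 * self.w)
        return min(max(p, self.p_min), self.p_max)

class Aggregator:
    def __init__(self, children, local_price=0.0):
        self.children = children            # prosumers or lower-level aggregators
        self.local_price = local_price      # price component added at this grid level

    def respond(self, price):
        # forward pass: push the accumulated price down; backward pass: return aggregated power
        total_price = price + self.local_price
        return sum(child.respond(total_price) for child in self.children)

feeder_a = Aggregator([Prosumer(0, 5, 3.0, 0.5), Prosumer(0, 5, 4.0, 1.0)], local_price=0.02)
feeder_b = Aggregator([Prosumer(-2, 2, 1.0, 0.8)], local_price=0.01)    # e.g. a battery owner
root = Aggregator([feeder_a, feeder_b])

for price in (0.0, 0.5, 1.0):
    print(price, root.respond(price))       # aggregated demand falls as the price rises

In a real scheme each prosumer would solve a full multi-period optimization and the price would be iterated until grid constraints are satisfied; the sketch only shows the tree-shaped message flow.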
However, being able to coordinate prosumers is not enough.
The main difference between the OPF and DR problems is that the latter involves the participation of self-serving agents, which cannot be a priori trusted by an independent system operator (ISO). This implies that if an agent finds it profitable (in terms of its own economic utility), it will solve a different optimization problem from the one provided by the ISO. For this reason, some aspects of DR formulations are better described through a game-theoretic framework.
Furthermore, several studies have focused on the case in which grid constraints are enforced by DSOs, directly modifying voltage angles at buses. Although this is a reasonable solution concept, the current shift of generation from the high voltage network to the low voltage network lets us think that in the future
prosumers and not DSOs could be in charge of regulating voltages and mitigating power peaks.
With this in mind, we focused on analyzing the decomposed OPF using game theory and mechanism design, which study the behavior and outcomes of a set of agents trying to maximize their own utilities $latex u(x_i,x_{-i})&s=1$, which depend on their own actions $latex x_i &s=1$ and on the action of the other agents $latex x_{-i}&s=1$, under a given ‘mechanism’. The whole field of mechanism design tries to escape from the Gibbard–Satterthwaite theorem, which can be perhaps better understood by means of its corollary:
If a strict voting rule has at least 3 possible outcomes, it is non-manipulable if and only if it is dictatorial.
It turns out, that the only way to escape from this impossibility result, is adopting money transfer. As such, our mechanism must define both an allocation rule and a taxation (or reward) rule. In this way, the overall value seen by the agents is equal to their own utility augmented by the taxation/remuneration imposed by the mechanism:
$latex v_i (x_i,x_{-i})= u_i(x_i,x_{-i}) + c_i(x_i,x_{-i}) &s=1$
Anyway, monetary transfers are as powerful as perilous. When designing taxes and incentives, one should always keep in mind two things:
1. Designing wrong incentives could result in spectacular failures, as we learned from a very anecdotal misuse of incentives in British colonial history, known as the cobra effect.
2. If there is a way to fool the mechanism, self-serving prosumers will almost surely find it out. "Know that some people will do everything they can to game the system, finding ways to win that you never could have imagined" ― Steven D. Levitt
A largely adopted solution concept, used to rule out most of the strategic behaviors from agents (but not the same as strategyproof mechanism), is the one of ex-post Nash Equilibrium (NE), or simply equilibrium, which is reached when the following set of problems are jointly minimized:
$latex \begin{aligned} \min_{x_i \in \mathcal{X}_i} & \quad v_i(x_i, x_{-i}) \quad \forall i \in \{N\} \\ s.t. & \quad Ax\leq b \end{aligned}&s=1 $
where $latex x_i \in \mathcal{X}_i &s=1$ means that the agents’ actions are constrained to be in the set $latex \mathcal{X}_i &s=1$, which could include for example the prosumer’s battery maximum capacity or the maximum power at which the prosumer can draw energy from the grid. The linear equation in the second row represents the grid constraints, which is a function of the actions of all the prosumers, $latex x = [x_i]_{i=1}^N &s=1$, where N is the number of prosumers we are considering.
Rational agents will always try to reach a NE, since in this situation they cannot improve their values given that the other prosumers do not change their actions.
Using basic optimization notions, the above set of problems can be reformulated using KKT conditions, which under some mild assumptions ensure that the prosumers’ problems are optimally solved. Briefly, we can augment the prosumers objective function using a first order approximation, through a Lagrangian multiplier $latex \lambda_i$, of the coupling constraints and using the indicator function to encode their own constraints:
$latex \tilde{v}_i (x_i,x_{-i}) = v_i (x_i,x_{-i}) + \lambda_i (Ax-b) + \mathcal{I}_{\mathcal{X}_i} &s=1$
The KKT conditions now read
$latex \begin{aligned} 0& \in \partial_{x_i} v_i(x_i,\mathrm{x}_{-i}) + \mathrm{N}_{\mathcal{X}_i} + A_i^T\lambda \\ 0 & \leq \lambda \perp -(Ax-b) \geq 0 \end{aligned} &s=1 $
where $latex \mathrm{N}_{\mathcal{X}_i}&s=1$ is the normal cone operator, which is the sub-differential of the indicator function.
Loosely speaking, Nash equilibrium is not always a reasonable solution concept, due to the fact that multiple equilibria usually exist. For this reason, equilibrium refinement concepts are usually applied, in which most of the equilibria are discarded a priori. The Variational NE (VNE) is one such refinement. In a VNE, the price of the shared constraints paid by each agent is the same. This has the nice economic interpretation that all the agents pay the same price for the common good (the grid). Note that we have already considered all the Lagrangian multipliers as equal, $latex \lambda_i = \lambda \quad \forall i \in \{N\}&s=1$, in writing the KKT conditions.
One of the nice properties of the VNE is that for well behaving problems, this equilibrium is unique. Being unique, and with a reasonable economic outcome (price fairness), rational prosumers will agree to converge to it, since at the equilibrium no one is better off changing his own actions while the other prosumers’ actions are fixed. It turns out that a trivial modification of the parallelized strategy we adopted to solve the multilevel hierarchical OPF can be used to reach the VNE.
On top of all this, new business models must be put in place to reward prosumers for their flexibility. In fact, rational agents would not participate in the market if the energy price they pay were higher than what they currently pay their energy retailer. One such business model is the aforementioned Californian proposal to enable NEM for the energy injected by electrical batteries.
Another possible use case is the creation of a self-consumption community, in which a group of prosumers in the same LV grid pays only at the point of common coupling with the DSO’s grid (which could be, e.g., the LV/MV transformer in figure 1). In this way, if the group of prosumers is heterogeneous (someone is producing energy while someone else is consuming), the overall cost they pay as a community will always be less than what they would have paid as single prosumers, at the expense of the DSO. But if this economic surplus drives the prosumers to take care of power quality in the LV/MV grid, the DSO could benefit from this business model, delegating part of its grid-regulating duties to them.
How does blockchain fit in? Synchronizing thousands of entities connected to different grid levels is a technically hard task. Blockchain technology can be used as a trustless distributed database for creating and managing energy communities of prosumers willing to participate in flexibility markets. On top of the blockchain, off-chain payment channels can be used to keep track of the energy consumed and produced by prosumers and to disburse payments in a secure and seamless way.
Different business models are possible, and technical solutions as well. But
we think that in the distribution grid the economic value lies in shifting the power production and consumption of the prosumers, enabling a truly smarter grid. At Hive Power we are enabling the creation of energy sharing communities where all participants are guaranteed to benefit from their participation, reaching at the same time a technical and financial optimum for the whole community. Key links: |
Let us define the auxiliary process $\Lambda_t=e^{\kappa t}\lambda_t$. Note that:
$$ \Lambda_t = e^{\kappa t}\lambda_0+\kappa e^{\kappa t} \int_0^t(\rho_s-\lambda_s)ds+\delta e^{\kappa t}\int_0^t dN_s$$
Hence after a jump occurs at $t$:
$$ \Lambda_t=\Lambda_{t-}+\delta e^{\kappa t}$$
Therefore by Ito's lemma for jump-diffusion processes:
$$ \begin{align}d\Lambda_t & = \frac{\partial \Lambda_t}{\partial t}dt+\frac{\partial \Lambda_t}{\partial \lambda_t}\kappa(\rho_t-\lambda_t)dt+(\Lambda_t-\Lambda_{t-})dN_t\\[9pt]& = \kappa e^{\kappa t}\rho_tdt+\delta e^{\kappa t}dN_t\end{align}$$
Integrating:
$$ \Lambda_t=\Lambda_0+\kappa\int_0^te^{\kappa s}\rho_sds+\delta\int_0^te^{\kappa s}dN_s$$
Finally:
$$ \lambda_t=\lambda_0e^{-\kappa t}+\kappa\int_0^te^{\kappa (s-t)}\rho_sds+\delta\int_0^te^{\kappa (s-t)}dN_s$$
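As a quick sanity check of this closed form (not part of the original derivation), one can fix a set of jump times, take a constant $\rho$, Euler-discretise the SDE, and compare with the formula; all parameter values below are arbitrary.

```python
import numpy as np

kappa, rho, delta, lam0 = 2.0, 1.5, 0.8, 1.0
T, dt = 5.0, 1e-4
jump_times = np.array([0.7, 1.3, 2.9, 4.1])   # fixed jump times for a pathwise check

# Euler scheme for d(lambda_t) = kappa*(rho - lambda_t)*dt + delta*dN_t
t_grid = np.arange(0.0, T + dt, dt)
lam = np.empty_like(t_grid)
lam[0] = lam0
for k in range(len(t_grid) - 1):
    dN = np.sum((jump_times > t_grid[k]) & (jump_times <= t_grid[k + 1]))
    lam[k + 1] = lam[k] + kappa * (rho - lam[k]) * dt + delta * dN

# Closed form specialised to constant rho:
# lambda_t = lambda_0*exp(-kappa*t) + rho*(1 - exp(-kappa*t)) + delta*sum_k exp(-kappa*(t - t_k))
def closed_form(t):
    jumps = jump_times[jump_times <= t]
    return (lam0 * np.exp(-kappa * t)
            + rho * (1.0 - np.exp(-kappa * t))
            + delta * np.sum(np.exp(-kappa * (t - jumps))))

print(lam[-1], closed_form(t_grid[-1]))   # should agree up to O(dt)
```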
To see where the trick comes from, notice that in the original SDE the following term is the "nuisance":
$$d\lambda_t = \cdots + \left(-\kappa \lambda_t dt\right) + \cdots$$
which corresponds to the differential of an exponential with constant $\kappa$:
$$ dx_t = \kappa x_tdt \quad \Leftrightarrow \quad x_t = Ce^{\kappa t}$$
Hence you need to get rid of it by making a $+\kappa \lambda_t dt$ term appear somehow, which can be achieved by differentiating an exponential with constant $\kappa$, i.e. by applying Ito's lemma to $\Lambda_t$ as defined above. |
Is there a nice closed-form expression for $\mathbb{E}_{\theta' \sim Dir(\alpha)} KL (Cat(x; \theta)|| Cat(x;\theta'))$, where $Dir(\alpha)$ is the Dirichlet distribution with concentration parameters $\alpha$ and $Cat(x;\theta)$ is the discrete (categorical) distribution with (log-)parameters $\theta$?
It turns out I can answer this myself with a bit more effort. Writing the inner KL divergence out explicitly, it becomes the (negative) entropy of the categorical plus the cross-entropy between the categorical and the expected log-Dirichlet. There is a nice closed form for the latter:
$\mathbb{E}_{X \sim Dir(\alpha)}[\log(X_i)] = \psi(\alpha_i) - \psi(\alpha_0)$ (from wikipedia).
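(Added as a sanity check, treating $\theta$ as the categorical probabilities.) The resulting identity, $\mathbb{E}_{\theta' \sim Dir(\alpha)} KL = \sum_i \theta_i\left(\log\theta_i - \psi(\alpha_i) + \psi(\alpha_0)\right)$, can be verified numerically:

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
alpha = np.array([2.0, 5.0, 1.5])          # Dirichlet concentration parameters
theta = np.array([0.2, 0.5, 0.3])          # categorical probabilities

# Closed form: sum_i theta_i * (log theta_i - (psi(alpha_i) - psi(alpha_0)))
closed = np.sum(theta * (np.log(theta) - digamma(alpha) + digamma(alpha.sum())))

# Monte Carlo estimate of E_{theta' ~ Dir(alpha)} KL(Cat(theta) || Cat(theta'))
samples = rng.dirichlet(alpha, size=200_000)
mc = np.mean(np.sum(theta * (np.log(theta) - np.log(samples)), axis=1))

print(closed, mc)   # the two numbers should agree to a few decimals
```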
Manipulate a few terms and Bob's your uncle. |
One thing to keep in mind is that IR and UV divergences appear in different kinematical regimes: UV divergences are basically due to the fact that in loop integrals there are not enough propagators to make the integral fall off at infinity. E.g., a bubble integral
$\int d^4l \, \frac{1}{l^2(l-p)^2}$ is logarithmically divergent. If you Taylor expand this expression for large loop momentum, this becomes obvious.
IR divergences, however, live in a completely different regime: they appear either when two particles become collinear, $p_1\sim p_2$, or when some particles become soft, $p_i\sim0$.
Or, put a little more compactly:
UV: loop momentum becomes large
IR: external momenta become collinear/soft.
This is one way to see why these two kinds of divergences are not connected. Nima and company probably meant just this, but in fancier terms. This post imported from StackExchange Physics at 2014-04-15 16:45 (UCT), posted by SE-user A friendly helper |
Consider again the quasilinear equation
(\(\star\)) \(a_1(x,y,u)u_x+a_2(x,y,u)u_y=a_3(x,y,u)\).
Let
$$ \Gamma:\ \ x=x_0(s),\ y=y_0(s),\ z=z_0(s), \ s_1\le s\le s_2,\ -\infty<s_1<s_2<+\infty $$ be a regular curve in \(\mathbb{R}^3\) and denote by \(\mathcal{C}\) the orthogonal projection of \(\Gamma\) onto the \((x,y)\)-plane, i. e., $$ \mathcal{C}:\ \ x=x_0(s),\ \ y=y_0(s). $$
Initial value problem of Cauchy: Find a \(C^1\)-solution \(u=u(x,y)\) of \((\star)\) such that \(u(x_0(s),y_0(s))=z_0(s)\), i. e., we seek a surface \(\mathcal{S}\) defined by \(z=u(x,y)\) which contains the curve \(\Gamma\).
Figure 2.2.2.1: Cauchy initial value problem
Definition. The curve \(\Gamma\) is said to be non-characteristic if $$ x_0'(s)a_2(x_0(s),y_0(s))-y_0'(s)a_1(x_0(s),y_0(s))\not=0. $$
Theorem 2.1. Assume \(a_1,\ a_2,\ a_3\in C^1\) in their arguments, the initial data \(x_0,\ y_0,\ z_0\in C^1[s_1,s_2]\) and \(\Gamma\) is non-characteristic. Then there is a neighborhood of \(\cal{C}\) such that there exists exactly one solution \(u\) of the Cauchy initial value problem.
Proof. (i) Existence. Consider the following initial value problem for the system of characteristic equations to (\(\star\)): \begin{eqnarray*} x'(t)&=&a_1(x,y,z)\\ y'(t)&=&a_2(x,y,z)\\ z'(t)&=&a_3(x,y,z) \end{eqnarray*} with the initial conditions \begin{eqnarray*} x(s,0)&=&x_0(s)\\ y(s,0)&=&y_0(s)\\ z(s,0)&=&z_0(s). \end{eqnarray*} Let \(x=x(s,t)\), \(y=y(s,t)\), \(z=z(s,t)\) be the solution, \(s_1\le s\le s_2\), \(|t|<\eta\) for an \(\eta>0\). We will show that this set of curves, see Figure 2.2.2.1, defines a surface. To show this, we consider the inverse functions \(s=s(x,y)\), \(t=t(x,y)\) of \(x=x(s,t)\), \(y=y(s,t)\) and show that \(z(s(x,y),t(x,y))\) is a solution of the initial problem of Cauchy. The inverse functions \(s\) and \(t\) exist in a neighborhood of \(t=0\) since $$ \det \frac{\partial(x,y)}{\partial(s,t)}\Big|_{t=0}= \left|\begin{array}{cc}x_s&x_t\\y_s&y_t\end{array}\right|_{t=0} =x_0'(s)a_2-y_0'(s)a_1\not=0, $$ and the initial curve \(\Gamma\) is non-characteristic by assumption.
Set
$$ u(x,y):=z(s(x,y),t(x,y)), $$ then \(u\) satisfies the initial condition since $$ u(x,y)|_{t=0}=z(s,0)=z_0(s). $$ The following calculation shows that \(u\) is also a solution of the differential equation (\(\star\)). \begin{eqnarray*} a_1u_x+a_2u_y&=&a_1(z_ss_x+z_tt_x)+a_2(z_ss_y+z_tt_y)\\ &=&z_s(a_1s_x+a_2s_y)+z_t(a_1t_x+a_2t_y)\\ &=&z_s(s_xx_t+s_yy_t)+z_t(t_xx_t+t_yy_t)\\ &=&a_3 \end{eqnarray*} since \(0=s_t=s_xx_t+s_yy_t\) and \(1=t_t=t_xx_t+t_yy_t\).
(ii) Uniqueness. Suppose that \(v(x,y)\) is a second solution. Consider a point \((x',y')\) in a neighborhood of the curve \((x_0(s),y_0(s))\), \(s_1-\epsilon\le s\le s_2+\epsilon\), \(\epsilon>0\) small. The inverse parameters are \(s'=s(x',y')\), \(t'=t(x',y')\), see Figure 2.2.2.2.
Figure 2.2.2.2: Uniqueness proof
Let
$$ {\mathcal{A}}:\ \ x(t):=x(s',t),\ y(t):=y(s',t),\ z(t):=z(s',t) $$ be the solution of the above initial value problem for the characteristic differential equations with the initial data $$ x(s',0)=x_0(s'),\ y(s',0)=y_0(s'),\ z(s',0)=z_0(s'). $$ According to its construction this curve is on the surface \(\mathcal{S}\) defined by \(u=u(x,y)\) and \(u(x',y')=z(s',t')\). Set $$ \psi(t):=v(x(t),y(t))-z(t), $$ then \begin{eqnarray*} \psi'(t)&=&v_xx'+v_yy'-z'\\ &=&v_xa_1+v_ya_2-a_3=0 \end{eqnarray*} and $$ \psi(0)=v(x(s',0),y(s',0))-z(s',0) =0 $$ since \(v\) is a solution of the differential equation and satisfies the initial condition by assumption. Thus, \(\psi(t)\equiv0\), i. e., $$ v(x(s',t),y(s',t))-z(s',t)=0. $$ Set \(t=t'\), then $$ v(x',y')-z(s',t')=0, $$ which shows that \(v(x',y')=u(x',y')\) because of \(z(s',t')=u(x',y')\).
\(\Box\)
Remark. In general, there is no uniqueness if the initial curve \(\Gamma\) is a characteristic curve, see an exercise and Figure 2.2.2.3, which illustrates this case.
Figure 2.2.2.3: Multiple solutions
Examples
Example 2.2.2.1:
Consider the Cauchy initial value problem
$$ u_x+u_y=0 $$ with the initial data $$ x_0(s)=s,\ y_0(s)=1,\ z_0(s)\ \mbox{is a given}\ C^1\mbox{-function}. $$ These initial data are non-characteristic since \(y_0'a_1-x_0'a_2=-1\). The solution of the associated system of characteristic equations $$ x'(t)=1,\ y'(t)=1,\ u'(t)=0 $$ with the initial conditions $$ x(s,0)=x_0(s),\ y(s,0)=y_0(s),\ z(s,0)=z_0(s) $$ is given by $$ x=t+x_0(s),\ y=t+y_0(s),\ z=z_0(s) , $$ i. e., $$ x=t+s,\ y=t+1,\ z=z_0(s). $$ It follows \(s=x-y+1,\ t=y-1\) and that \(u=z_0(x-y+1)\) is the solution of the Cauchy initial value problem.
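(A small symbolic check, not part of the original notes.) One can verify with SymPy that \(u=z_0(x-y+1)\) satisfies both the equation and the initial condition \(u(x,1)=z_0(x)\):

```python
import sympy as sp

x, y = sp.symbols('x y')
z0 = sp.Function('z0')            # arbitrary C^1 initial profile

u = z0(x - y + 1)                 # candidate solution from the characteristics
print(sp.simplify(sp.diff(u, x) + sp.diff(u, y)))   # 0: the PDE u_x + u_y = 0 holds
print(u.subs(y, 1))                                  # z0(x): the initial condition on y = 1
```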
Example 2.2.2.2:
A problem from kinetics in chemistry. Consider for \(x\ge0\), \(y\ge0\) the problem
$$ u_x+u_y=\left(k_0e^{-k_1x}+k_2\right)(1-u) $$ with initial data $$ u(x,0)=0,\ x>0,\ \mbox{and}\ u(0,y)=u_0(y),\ y>0. $$ Here the constants \(k_j\) are positive, these constants define the velocity of the reactions in consideration, and the function \(u_0(y)\) is given. The variable \(x\) is the time and \(y\) is the height of a tube, for example, in which the chemical reaction takes place, and \(u\) is the concentration of the chemical substance.
In contrast to our previous assumptions, the initial data are not in \(C^1\). The projection \({\mathcal C}_1\cup {\mathcal C}_2\) of the initial curve onto the \((x,y)\)-plane has a corner at the origin, see Figure 2.2.2.4.
Figure 2.2.2.4: Domains to the chemical kinetics example
The associated system of characteristic equations is
$$ x'(t)=1,\ y'(t)=1,\ z'(t)=\left(k_0e^{-k_1x}+k_2\right)(1-z). $$ It follows \(x=t+c_1\), \(y=t+c_2\) with constants \(c_j\). Thus the projection of the characteristic curves on the \((x,y)\)-plane are straight lines parallel to \(y=x\). We will solve the initial value problems in the domains \(\Omega_1\) and \(\Omega_2\), see Figure 2.2.2.4, separately.
(i)
The initial value problem in \(\Omega_1\). The initial data are $$ x_0(s)=s,\ y_0(s)=0, \ z_0(s)=0,\ s\ge 0. $$ It follows $$ x=x(s,t)=t+s,\ y=y(s,t)=t. $$ Thus $$ z'(t)=(k_0e^{-k_1(t+s)}+k_2)(1-z),\ z(0)=0. $$ The solution of this initial value problem is given by $$ z(s,t)=1-\exp\left(\frac{k_0}{k_1}e^{-k_1(s+t)}-k_2t-\frac{k_0}{k_1}e^{-k_1s}\right). $$ Consequently $$ u_1(x,y)=1-\exp\left(\frac{k_0}{k_1}e^{-k_1x}-k_2y-\frac{k_0}{k_1}e^{-k_1(x-y)}\right) $$ is the solution of the Cauchy initial value problem in \(\Omega_1\). If time \(x\) tends to \(\infty\), we get the limit $$ \lim_{x\to\infty} u_1(x,y)=1-e^{-k_2y}. $$
(ii)
The initial value problem in \(\Omega_2\). The initial data are here
$$
x_0(s)=0,\ y_0(s)=s, \ z_0(s)=u_0(s),\ s\ge 0. $$ It follows $$ x=x(s,t)=t,\ y=y(s,t)=t+s. $$ Thus $$ z'(t)=(k_0e^{-k_1t}+k_2)(1-z),\ z(0)=u_0(s). $$ The solution of this initial value problem is given by $$ z(s,t)=1-(1-u_0(s))\exp\left(\frac{k_0}{k_1}e^{-k_1t}-k_2t-\frac{k_0}{k_1}\right). $$ Consequently $$ u_2(x,y)=1-(1-u_0(y-x))\exp\left(\frac{k_0}{k_1}e^{-k_1x}-k_2x-\frac{k_0}{k_1}\right) $$ is the solution in \(\Omega_2\).
If \(x=y\), then
\begin{eqnarray*} u_1(x,y)&=&1-\exp\left(\frac{k_0}{k_1}e^{-k_1x}-k_2x-\frac{k_0}{k_1}\right)\\ u_2(x,y)&=&1-(1-u_0(0))\exp\left(\frac{k_0}{k_1}e^{-k_1x}-k_2x-\frac{k_0}{k_1}\right). \end{eqnarray*} If \(u_0(0)>0\), then \(u_1<u_2\) if \(x=y\), i. e., there is a jump of the concentration of the substrate along its burning front defined by \(x=y\). Remark. Such a problem with discontinuous initial data is called Riemann problem. See an exercise for another Riemann problem.
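(A numerical cross-check of the formula in \(\Omega_1\), not part of the original notes.) Integrating the characteristic ODE for one value of \(s\) and comparing with the closed form for \(z(s,t)\), with arbitrary positive constants:

```python
import numpy as np
from scipy.integrate import solve_ivp

k0, k1, k2 = 1.0, 0.5, 0.3     # arbitrary positive rate constants
s = 0.8                        # characteristic starting at (x, y) = (s, 0)

# ODE along the characteristic in Omega_1: z'(t) = (k0*exp(-k1*(t+s)) + k2)*(1 - z), z(0) = 0
sol = solve_ivp(lambda t, z: (k0 * np.exp(-k1 * (t + s)) + k2) * (1.0 - z),
                t_span=(0.0, 4.0), y0=[0.0], rtol=1e-10, atol=1e-12)

t_end = sol.t[-1]
z_num = sol.y[0, -1]

# Closed-form solution derived above
z_exact = 1.0 - np.exp(k0/k1 * np.exp(-k1*(s + t_end)) - k2*t_end - k0/k1 * np.exp(-k1*s))

print(z_num, z_exact)          # should agree to the solver tolerance
```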
The case that a solution of the equation is known
Here we will see that we get immediately a solution of the Cauchy initial value problem if a solution of the
homogeneous linear equation $$ a_1(x,y)u_x+a_2(x,y)u_y=0 $$ is known.
Let
$$ x_0(s),\ y_0(s),\ z_0(s),\ s_1<s<s_2 $$ be the initial data and let \(u=\phi(x,y)\) be a solution of the differential equation. We assume that $$ \phi_x(x_0(s),y_0(s))x_0'(s)+\phi_y(x_0(s),y_0(s))y_0'(s)\not=0 $$ is satisfied. Set $$ g(s)=\phi(x_0(s),y_0(s)) $$ and let \(s=h(g)\) be the inverse function. The solution of the Cauchy initial problem is given by \(u_0\left(h(\phi(x,y))\right)\).
This follows since in the problem considered a composition of a solution is a solution again, see an exercise, and since
$$ u_0\left(h(\phi(x_0(s),y_0(s)))\right)=u_0(h(g))=u_0(s). $$
Example 2.2.2.3:
Consider equation
$$ u_x+u_y=0 $$ with initial data $$ x_0(s)=s,\ y_0(s)=1,\ u_0(s)\ \mbox{is a given function}. $$ A solution of the differential equation is \(\phi(x,y)=x-y\). Thus $$ \phi(x_0(s),y_0(s))=s-1 $$ and $$ u_0(\phi+1)=u_0(x-y+1) $$ is the solution of the problem. |
Category:Boolean Algebras
Furthermore, these operations are required to satisfy the following axioms:
\((BA_1 \ 0)\): $S$ is closed under $\vee$, $\wedge$ and $\neg$
\((BA_1 \ 1)\): Both $\vee$ and $\wedge$ are commutative
\((BA_1 \ 2)\): Both $\vee$ and $\wedge$ distribute over the other
\((BA_1 \ 3)\): Both $\vee$ and $\wedge$ have identities $\bot$ and $\top$ respectively
\((BA_1 \ 4)\): $\forall a \in S: a \vee \neg a = \top, a \wedge \neg a = \bot$
Furthermore, these operations are required to satisfy the following axioms:
\((BA_2 \ 0)\): Closure: \(\forall a, b \in S:\ a \vee b \in S,\ a \wedge b \in S,\ \neg a \in S\)
\((BA_2 \ 1)\): Commutativity: \(\forall a, b \in S:\ a \vee b = b \vee a,\ a \wedge b = b \wedge a\)
\((BA_2 \ 2)\): Associativity: \(\forall a, b, c \in S:\ a \vee \left({b \vee c}\right) = \left({a \vee b}\right) \vee c,\ a \wedge \left({b \wedge c}\right) = \left({a \wedge b}\right) \wedge c\)
\((BA_2 \ 3)\): Absorption Laws: \(\forall a, b \in S:\ \left({a \wedge b}\right) \vee b = b,\ \left({a \vee b}\right) \wedge b = b\)
\((BA_2 \ 4)\): Distributivity: \(\forall a, b, c \in S:\ a \wedge \left({b \vee c}\right) = \left({a \wedge b}\right) \vee \left({a \wedge c}\right),\ a \vee \left({b \wedge c}\right) = \left({a \vee b}\right) \wedge \left({a \vee c}\right)\)
\((BA_2 \ 5)\): Identity Elements: \(\forall a, b \in S:\ \left({a \wedge \neg a}\right) \vee b = b,\ \left({a \vee \neg a}\right) \wedge b = b\)
A Boolean algebra is an algebraic structure $\left({S, \vee, \wedge}\right)$ such that:
\((BA \ 0)\): $S$ is closed under both $\vee$ and $\wedge$
\((BA \ 1)\): Both $\vee$ and $\wedge$ are commutative
\((BA \ 2)\): Both $\vee$ and $\wedge$ distribute over the other
\((BA \ 3)\): Both $\vee$ and $\wedge$ have identities $\bot$ and $\top$ respectively
\((BA \ 4)\): $\forall a \in S: \exists \neg a \in S: a \vee \neg a = \top, a \wedge \neg a = \bot$
The operations $\vee$ and $\wedge$ are called join and meet, respectively.
The identities $\bot$ and $\top$ are called
bottom and top, respectively.
Also, $\neg a$ is called the
complement of $a$.
The operation $\neg$ is called complementation.
Subcategories
This category has only the following subcategory.
Boolean Lattices
Pages in category "Boolean Algebras"
The following 24 pages are in this category, out of 24 total.
Cancellation of Join in Boolean Algebra
Cancellation of Meet in Boolean Algebra
Complement in Boolean Algebra is Unique
Complement of Bottom
Complement of Bottom (Boolean Algebras)
Complement of Bottom/Boolean Algebra
Complement of Complement (Boolean Algebras)
Complement of Top
Complement of Top (Boolean Algebras)
Complement of Top/Boolean Algebra |
Euclidean Algorithm/Examples
Contents
1 Examples of Use of Euclidean Algorithm
1.1 GCD of $341$ and $527$
1.2 GCD of $2190$ and $465$
1.3 GCD of $9 n + 8$ and $6 n + 5$
1.4 Solution of $31 x \equiv 1 \pmod {56}$
1.5 GCD of $108$ and $243$
1.6 GCD of $132$ and $473$
1.7 GCD of $129$ and $301$
1.8 GCD of $156$ and $1740$
1.9 GCD of $299$ and $481$
1.10 GCD of $361$ and $1178$
1.11 GCD of $527$ and $765$
1.12 GCD of $2145$ and $1274$
1.13 GCD of $12321$ and $8658$
Examples of Use of Euclidean Algorithm
The GCD of $341$ and $527$ is found to be:
$\gcd \set {341, 527} = 31$
$31$ can be expressed as an integer combination of $341$ and $527$:
$31 = 2 \times 527 - 3 \times 341$
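(Not part of the original page.) The Bézout coefficients quoted in these examples can be reproduced with the extended Euclidean algorithm, for instance:

```python
def extended_gcd(a: int, b: int):
    """Return (g, x, y) with g = gcd(a, b) and g == x*a + y*b."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

print(extended_gcd(527, 341))   # (31, 2, -3):  31 = 2*527 - 3*341
print(extended_gcd(481, 299))   # (13, 5, -8):  13 = 5*481 - 8*299
```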
The GCD of $2190$ and $465$ is found to be:
$\gcd \set {2190, 465} = 15$
Hence $15$ can be expressed as an integer combination of $2190$ and $465$:
$15 = 33 \times 465 - 7 \times 2190$
The GCD of $9 n + 8$ and $6 n + 5$ is found to be:
$\gcd \set {9 n + 8, 6 n + 5} = 1$
Hence:
$2 \paren {9 n + 8} - 3 \paren {6 n + 5} = 1$
Let $x \in \Z$ be an integer such that:
$31 x \equiv 1 \pmod {56}$
Then by using the Euclidean Algorithm:
$x = -9$
is one such $x$.
The GCD of $108$ and $243$ is:
$\gcd \set {108, 243} = 27$
The GCD of $132$ and $473$ is:
$\gcd \set {132, 473} = 11$
The GCD of $129$ and $301$ is found to be:
$\gcd \set {129, 301} = 43$
Hence $43$ can be expressed as an integer combination of $129$ and $301$:
$43 = 1 \times 301 - 2 \times 129$
The GCD of $156$ and $1740$ is:
$\gcd \set {156, 1740} = 12$
The GCD of $299$ and $481$ is found to be:
$\gcd \set {299, 481} = 13$
Hence $13$ can be expressed as an integer combination of $299$ and $481$:
$13 = 5 \times 481 - 8 \times 299$
The GCD of $361$ and $1178$ is:
$\gcd \set {361, 1178} = 19$
The GCD of $527$ and $765$ is:
$\gcd \set {527, 765} = 17$
The GCD of $2145$ and $1274$ is:
$\gcd \set {2145, 1274} = 13$
Hence $13$ can be expressed as an integer combination of $2145$ and $1274$:
$13 = 32 \times 1274 - 19 \times 2145$
The GCD of $12321$ and $8658$ is:
$\gcd \set {12321, 8658} = 333$ |
How to Automate Meshing in Frequency Bands for Acoustic Simulations
Think of the curved lid of an elegant grand piano. The curve follows the strings' lengths, which correspond to the pitches we perceive. This visual captures an important element of acoustics: our perception of pitch is logarithmic. This means that a large frequency range is involved in acoustic phenomena. In turn, when modeling acoustics problems, there is a large range of wavelengths to be meshed. But how?
Introduction to Free-Field FEM Wave Problems
A large frequency range needs to be computed, which means large wavelength ranges need to be resolved by the mesh. To efficiently mesh large frequency ranges, we can optimize the mesh element size by remeshing for a given frequency range when using finite element method (FEM) interfaces in the COMSOL Multiphysics® software.
The finite element method is implemented in most interfaces in COMSOL Multiphysics, including the
Pressure Acoustics, Frequency Domain and the Pressure Acoustics, Transient interfaces. Other interfaces in the Acoustics Module are optimized for their intended purpose by implementing the boundary element method (BEM), ray tracing, or dG-FEM (time explicit). When using the Pressure Acoustics interface, FEM uses a mesh to discretize the geometry and solves the acoustic wave equation at these points. The full, continuous solution is interpolated from these points.
An automotive muffler with a porous lining, modeled using the pressure acoustics functionality in the COMSOL® software.
When meshing an FEM model, we need to get a good approximation of the geometry and include details of the physics. When using the
Pressure Acoustics interface, we always need to resolve the acoustic waves. A good mesh resolves the geometry and the physics of the model, but a great mesh accurately solves the problem and also uses the smallest number of mesh elements possible. In this blog post, we will look at how to mesh free-field/open-ended problems with the fewest mesh points.
Mesh elements are made up of nodes. For a linear mesh element, the nodes are located at the vertices. Second-order polynomial interpolation is the default shape function for wave equations in COMSOL Multiphysics. Second-order (or quadratic) elements have an additional node along each element edge and resolve waves accurately. For free-field wave problems, we need about 10 or 12 nodes per wavelength to resolve the wave. Consequently, for wave-based modeling with quadratic elements, we need 5 or 6 second-order elements per wavelength (hmax = \lambda_0/5). For short wavelengths (higher frequencies), the element size needs to be smaller than at lower frequencies.
Audio applications, which are concerned with human perception, have a frequency range of 20 Hz to 20 kHz. In air at room temperature, audio problems therefore have a wavelength range from about 17 m down to 17 mm. If we were to compute over the entire human auditory frequency range with one mesh, we would need to resolve the wavelength that corresponds to 20 kHz. At the high-frequency end, this leads to a maximum element size, or spatial resolution, of about 17 mm/5 ≈ 3.4 mm. Resolving the mesh for the highest frequency leads to an excessively dense mesh for the low-frequency predictions. At 20 Hz, the wavelength is 17 m, so such a mesh would place thousands of nodes across a single wavelength, far more than the 10 or 12 that are required. Each node corresponds to a memory allocation for the computer. While this dense mesh approach is great from an accuracy perspective, the excessively dense mesh takes up computational resources and consequently takes longer to compute.
Efficient Meshes in COMSOL Multiphysics®
Setup for Single-Octave Mesh
To avoid an inefficient meshing approach, we can split the problem into smaller frequency bands; initially, one octave, where the mesh for each frequency band is resolved according to its upper frequency limit. In this example, the center frequency, f_{C,n}, is referenced from f_0, the prescribed frequency,
f_{C,n} = 2^n \times f_0,
where n is the octave band number from the reference (positive n is higher-pitch octaves, negative n is lower-pitch octaves).
The upper and lower frequency band limits are defined from the center-band frequency
f_L = 2^{-\frac{1}{2}} \times f_{C,n} ,
f_U = 2^{\frac{1}{2}} \times f_{C,n}
Note that f_U is twice f_L (thus one octave higher).
Defining the octaves in the model parameters.
We can use these parameters in the frequency-domain study using the
range() function to define a logarithmic distribution of points within each band
10^{\textrm{range}(\log_{10}(f_L),\ df_\textrm{log},\ \log_{10}(f_U) - df_\textrm{log})},
The logarithmic frequency spacing, df_\textrm{log} = (\log_{10}(f_U)-\log_{10}(f_L))/(N-1), is the width of the band on a logarithmic scale divided by N-1, where N is the number of frequencies per band. The upper limit is excluded from the list because it coincides with the lower limit of the next band, so that frequency is not computed twice.
Setting the frequencies solved for in each octave band.
The maximum mesh element size (traditionally given the variable name
hmax) is then taken from the higher limit of the given frequency band
hmax = 343[m/s]/f_U/5.
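Outside of the software, the band logic above (center frequency, band limits, per-band frequency list, and hmax) can be prototyped in a few lines to check the numbers. The sketch below only illustrates the formulas, it is not how the parameters are entered in COMSOL Multiphysics, and the reference frequency, band range, and frequency count are arbitrary choices.

```python
import numpy as np

f0 = 1000.0      # reference (center) frequency in Hz, an arbitrary choice
c  = 343.0       # speed of sound in air, m/s
b  = 1           # 1 = octave bands, 3 = third-octave bands, ...
N  = 10          # number of frequencies per band

def band(n, f0=f0, b=b):
    """Center, lower, and upper frequencies of band number n."""
    fc = 2.0**(n / b) * f0
    return fc, 2.0**(-1.0 / (2 * b)) * fc, 2.0**(1.0 / (2 * b)) * fc

for n in range(-2, 3):                        # a few bands around the reference
    fc, fl, fu = band(n)
    freqs = np.logspace(np.log10(fl), np.log10(fu), N, endpoint=False)  # fU excluded
    hmax  = c / fu / 5.0                      # 5 quadratic elements per wavelength
    print(f"n={n:+d}  fC={fc:8.1f} Hz  hmax={hmax*1e3:6.2f} mm  "
          f"freqs {freqs[0]:7.1f} .. {freqs[-1]:7.1f} Hz")
```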
Note that if you do not know the speed of sound, you can use
comp1.mat1.def.cs(23[degC]) to access the speed of sound for the first material (in a list), defined in
Component 1 at 23°C. If you are using the built-in material Air, the speed of sound comes from the ideal gas law, so the fluid temperature is a required input.
The custom mesh sequence with the parameter hmax applied to the Maximum element size.
The
Maximum element size is applied to the mesh on the Size node. The elements can be smaller than this constraint if smaller geometry details need to be resolved, as shown in the figure below. The smallest element is controlled by the Minimum element size setting. The Curvature factor and Resolution of narrow regions settings are also important mesh settings.
The mesh element quality shown on the top for two octave bands.
Setup for Multiple Octave Bands
If the COMSOL Multiphysics model is set up as described above, it would yield one octave’s worth of frequencies. However, we need up to 10 octaves for our audio investigations.
A parametric sweep over n, such that each value of n is an octave and the upper and lower frequency limits change accordingly.
To implement a parametric sweep in COMSOL Multiphysics, a
Parametric Sweep study step is added to the study to change the frequency bands. The benefit of working with parameters is that all of the frequency band limits change automatically when the parameter sweep variable n changes. The parameter n is the natural choice for the parameter sweep because each value of n corresponds to a frequency band. Setting it up in this way means that the original frequency is now the reference frequency and must be chosen appropriately.
For the results shown below, the same frequencies were computed over the same range, once with the band-by-band meshes and once with a single mesh resolved for the highest frequency. The study that remeshes according to the octave band number took 32 s, whereas the single-mesh approach took 79 s. This is a significant saving of time and computational resources.
The instantaneous pressure is shown on the bottom for the different frequencies and meshes.
The
Octave Band plot type is used to calculate the required response. Ensure that the line markers are placed at data points. Alternatively, to obtain a continuous line, change the x-Axis Data to Expression and enter freq, the variable for frequency.
Plotting the continuous line.
Choose Point Graph and ensure that the plot settings are set up as shown above.
Setup for nth-Octave Bands
The previous discussion sets up the problem in octave bands. However, you can use the general form of
f_{C,n} = 2^\frac{n}{b} \times f_0
f_L = 2^{-\frac{1}{2b}} \times f_{C,n} ,
f_U = 2^{\frac{1}{2b}} \times f_{C,n} ,
to allow fractions of octave bands. In the above setup, let b = 3 for third octave bands or 6 for sixth octave bands. The narrower the frequency band, the more times the meshing sequence runs, so there is a balance to be struck.
The parameters that set up the general meshing procedure in any octave band are located in the Remeshing in Frequency Bands model. It is easy to save the necessary parameters in a .txt file and load them when setting up a new model. This avoids having to enter them every time.
Discussion and Caveats of Meshing in Frequency Bands for Acoustics Simulations
The method presented in this blog post uses a canonical geometry to clearly illustrate the process for optimizing the mesh. Consequently, the meshing routine takes relatively little time. For realistic geometries, the meshing routine may take longer and the benefits may be less marked. In this case, you should defeature or use virtual operations to remove any physically irrelevant geometry details.
For some problems, the temperature or density of the fluid may change significantly over the computational domain. If this occurs, the speed of sound will change and must be included in the model. The mesh must be dense enough to reflect this.
This discussion is not relevant to the
Ray Tracing, Pressure Acoustics, Boundary Element, or Acoustic Diffusion interfaces. With care, the information in this blog post can be applied to free-field problems of the Aeroacoustics and Thermoviscous Acoustics interfaces or the dG-FEM-based Ultrasound interfaces. The convective effect of the flow alters the wavelength, and a sophisticated mesh should reflect this up- or downstream of a source. The Linearized Navier-Stokes and Linearized Euler interfaces have default linear interpolation, so 10 or 12 elements are required per wavelength. The Thermoviscous Acoustics interface is designed for resolving the acoustic boundary layer. The thickness of this layer is also frequency dependent, and a similar method to the one discussed here can be used for efficient meshing and resolution of the layer.
Finally, the discussion in this blog post explicitly assumes that the wavelength is known. This assumption usually holds for free-field modeling; however, for bounded, resonant problems, the total sound field depends on the boundary condition values and the locations of the boundaries. This means that the pressure amplitudes can have shapes with an analogous wavelength that could be significantly shorter than the free-field wavelength. To get an accurate solution, you must perform a mesh convergence study.
Conclusion
This blog post has demonstrated that remeshing in frequency bands can save a significant amount of time. In COMSOL Multiphysics, this is implemented by parameterizing the upper- and lower-frequency band limits. The approach demonstrated here is applicable for interfaces that implement FEM and have quadratic interpolation.
Next Steps
Try it yourself: Click the button below to access the MPH-file for the model discussed in this blog post. Note that you must log into COMSOL Access and have a valid software license to download the file.
|
The Eyring Equation, developed by Henry Eyring in 1935, is based on transition state theory and describes the relationship between reaction rate and temperature. It is similar to the Arrhenius Equation, which also describes the temperature dependence of reaction rates. However, whereas the Arrhenius Equation can be applied only to gas-phase kinetics, the Eyring Equation is useful in the study of gas, condensed, and mixed-phase reactions, where the collision model is not relevant.
Introduction
The Eyring Equation gives a more accurate calculation of rate constants and provides insight into how a reaction progresses at the molecular level. The Equation is given below:
\[ k = \dfrac{k_BT}{h}e^{-\frac{\bigtriangleup H^\ddagger}{RT}}e^\frac{\bigtriangleup S^\ddagger}{R} \label{1}\]
Consider a bimolecular reaction:
\[A~+B~\rightarrow~C \label{2}\]
\[K = \dfrac{[C]}{[A][B]} \label{3}\]
where \(K\) is the equilibrium constant. In the transition state model, the activated complex \(AB^\ddagger\) is formed:
\[A~+~B~\rightleftharpoons ~AB^\ddagger~\rightarrow ~C \label{4}\]
\[K^\ddagger=\dfrac{[AB]^\ddagger}{[A][B]} \label{5}\]
There is an energy barrier, called activation energy, in the reaction pathway. A certain amount of energy is required for the reaction to occur. The transition state, \(AB^\ddagger\), is formed at maximum energy. This high-energy complex represents an unstable intermediate. Once the energy barrier is overcome, the reaction is able to proceed and product formation occurs.
Figure \(\PageIndex{1}\) : Reaction coordinate diagram for the bimolecular nucleophilic substitution (\(S_N2\)) reaction between bromomethane and the hydroxide anion. Image used with permission from Wikipedia.
The rate of a reaction is equal to the number of activated complexes decomposing to form products. Hence, it is the concentration of the high-energy complex multiplied by the frequency of it surmounting the barrier.
\[\begin{eqnarray} rate~&=&~v[AB^\ddagger] \label{6} \\ &=&~v[A][B]K^\ddagger \label{7} \end{eqnarray} \]
The rate can be rewritten:
\[rate~=~k[A][B] \label{8}\]
Combining Equations \(\ref{8}\) and \(\ref{7}\) gives:
\[ \begin{eqnarray} k[A][B]~&=&~v[A][B]K^\ddagger \label{9} \\ k~&=&~vK^\ddagger \label{10} \end{eqnarray} \]
where
\(v\) is the frequency of vibration, \(k\) is the rate constant and \(K ^\ddagger \) is the thermodynamic equilibrium constant.
The frequency of vibration is given by:
\[v~=~\dfrac{k_BT}{h} \label{11}\]
where
\(k_B\) is the Boltzmann constant (\(1.381 \times 10^{-23}\ \mathrm{J/K}\)), \(T\) is the absolute temperature in kelvin (K) and \(h\) is Planck's constant (\(6.626 \times 10^{-34}\ \mathrm{J\,s}\)).
Substituting Equation \(\ref{11}\) into Equation \(\ref{10}\) :
\[k~=~\dfrac{k_BT}{h}K^\ddagger \label{12}\]
Equation \(\ref{12}\) is often tagged with another term \((M^{1-m})\) that makes the units consistent, where \(M\) is the molarity and \(m\) is the molecularity of the reaction.
\[k~=~\dfrac{k_BT}{h}K^\ddagger (M^{1-m}) \label{E12}\]
The following thermodynamic equations further describe the equilibrium constant:
\[ \begin{eqnarray} \Delta G^\ddagger~&=&~-RT\ln{K^\ddagger}\label{13} \\ \Delta G^\ddagger~&=&~\Delta H^\ddagger~-~T\Delta S^\ddagger \label{14} \end{eqnarray} \]
where \(\Delta G^\ddagger\) is the Gibbs energy of activation, \(\Delta H^\ddagger\) is the
enthalpy of activation and \(\Delta S^\ddagger\) is the entropy of activation. Combining Equations \(\ref{13}\) and \(\ref{14}\) to solve for \(\ln K ^\ddagger \)
\[\ln{K}^\ddagger~=~-\dfrac{\Delta H^\ddagger}{RT}~+~\dfrac{\Delta S^\ddagger}{R} \label{15}\]
The Eyring Equation is finally given by substituting Equation \(\ref{15}\) into Equation \(\ref{12}\):
\[ k~=~\dfrac{k_BT}{h}e^{-\frac{\Delta H^\ddagger}{RT}}e^\frac{\Delta S^\ddagger}{R} \label{16}\]
Application of the Eyring Equation
The linear form of the Eyring Equation is given below:
\[\ln{\dfrac{k}{T}}~=~\dfrac{-\Delta H^\ddagger}{R}\dfrac{1}{T}~+~\ln{\dfrac{k_B}{h}}~+~\dfrac{\Delta S^\ddagger}{R} \label{17}\]
The values for \(\Delta H^\ddagger\) and \(\Delta S^\ddagger\) can be determined from kinetic data obtained from a \(\ln{\dfrac{k}{T}}\) vs. \(\dfrac{1}{T}\) plot. The Equation is a straight line with negative slope, \(\dfrac{-\Delta H^\ddagger}{R}\), and a y-intercept, \(\dfrac{\Delta S^\ddagger}{R}+\ln{\dfrac{k_B}{h}}\).
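As an illustration of this procedure (with made-up rate-constant data, so the numbers mean nothing physically), the fit and the extraction of \(\Delta H^\ddagger\) and \(\Delta S^\ddagger\) take only a few lines:

```python
import numpy as np

R  = 8.314       # J/(mol K)
kB = 1.381e-23   # J/K
h  = 6.626e-34   # J s

# Hypothetical data: rate constants k (1/s) measured at several temperatures (K)
T = np.array([290.0, 300.0, 310.0, 320.0, 330.0])
k = np.array([1.2e-3, 3.1e-3, 7.4e-3, 1.7e-2, 3.6e-2])

# Linear fit of ln(k/T) against 1/T:  slope = -dH/R,  intercept = ln(kB/h) + dS/R
slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)

dH = -slope * R                         # enthalpy of activation, J/mol
dS = (intercept - np.log(kB / h)) * R   # entropy of activation, J/(mol K)

print(f"dH_act = {dH/1000:.1f} kJ/mol, dS_act = {dS:.1f} J/(mol K)")
```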
|
Exercise 1.14 of the book Rordam, Larsen and Laustsen, "An Introduction to K-theory for C*-algebras", asks one to prove that an upper triangular matrix with entries from some C*-algebra $A$ is invertible in $M_n(A)$ if and only if all of its diagonal entries are invertible in $A$.
While trying to solve this, I found that if $a$ is invertible and $\delta$ is such that $(a^{-1}\delta)^n=0$, then $a+\delta$ is invertible too and its inverse is given by $(a+\delta)^{-1}=\sum_{k=0}^{n} (-a^{-1}\delta)^k a^{-1}$. Using this fact I can show that if the diagonal is invertible, then an upper-triangular matrix with this diagonal is invertible too, and also that if an upper-triangular matrix has an upper-triangular inverse, then its diagonal is invertible. So all I need to prove is that if an upper-triangular matrix is invertible, then its inverse is upper-triangular. I have failed to prove this.
There is also a hint for this exercise: "Solve the equation $ab=1$, where $a$ is as above [i.e. an upper-triangular matrix] and where $b$ is an unknown upper triangular matrix." A solution of this equation follows from my reasoning above, but this doesn't help.
Update (counterexample attempt): I have made one more attempt and it looks to me like I have found a counterexample. However, I think there must be a mistake in it (because otherwise there is a mistake in the book). Here it is. Let $A=B(l^2(\mathbb{N}))$ be the algebra of bounded operators on the space of sequences $x=\{x_i\}_{i=1}^\infty$ with $\|x\|^2=\sum_{i=1}^{\infty}|x_i|^2<\infty$. Let $z\in A$ be defined by $(zx)_{2n-1}=0$, $(zx)_{2n}=x_n$, and $t\in A$ be defined by $(tx)_{2n-1}=x_n$, $(tx)_{2n}=0$. Then we have $t^*t=z^*z=tt^*+zz^*=1$ and $t^*z=z^*t=0$. From these we have that $$\begin{pmatrix}z&tz^*\\ 0&t^*\end{pmatrix}\begin{pmatrix}z^*&0\\ zt^*&t\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}$$ and $$\begin{pmatrix}z^*&0\\ zt^*&t\end{pmatrix}\begin{pmatrix}z&tz^*\\ 0&t^*\end{pmatrix}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}.$$ So now my question should read "Where am I wrong?". |