In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: $$(f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,$$ where $f^*$ denotes the complex conjugate of $f$.
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered?
@tpg2114 To reduce the number of data points for calculating the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points.
@DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably a historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
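Here is a minimal sketch (my own, with made-up stand-in signals) of how one might index the output of scipy.signal.correlate in time; subtracting the mean first also speaks to the sign question, since correlating two large all-positive signals without de-meaning skews the raw values:

    import numpy as np
    from scipy import signal

    dt = 1e-9
    t = np.arange(200) * dt
    shear = np.abs(np.sin(2e7 * t))             # stand-in for |shear strain|
    dTdt = np.roll(shear, 5)                    # stand-in for dT/dt, delayed 5 steps

    a = (dTdt - dTdt.mean()) / dTdt.std()       # de-mean, otherwise the raw
    b = (shear - shear.mean()) / shear.std()    # correlation values can mislead
    corr = signal.correlate(a, b, mode="full")  # length 2*200 - 1 = 399 (the "~400")
    lags = signal.correlation_lags(len(a), len(b), mode="full")
    print(lags[np.argmax(corr)] * dt)           # ~ +5*dt: dT/dt trails the strain

The result is roughly twice as long as the inputs because mode="full" slides one series across every possible offset of the other; correlation_lags maps each output index to a signed lag, and the argmax picks the lag of best alignment.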
Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar}~p,\qquad x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x,$$ and then define the following: $$K_1=\frac 14 (p^2-q^2),\qquad K_2=\frac 14 (pq+qp),\qquad J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2).$$ The first part is to show that $$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified: what is homework and what is "conceptual"? I.e., is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's approximation is?
Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper)
Short answer: if $\frac 1 \pi$ is a normal number in base $2$, then the series converges in measure (but not necessarily in the usual sense). However, the normality of $\pi$ and $\frac 1 \pi$ has not yet been proved (and it is not known whether it can be proved). I did not attempt to prove the converse, that convergence in measure implies normality of $\frac 1 \pi$.
Disclaimer: I would be glad if someone with a good knowledge of measure theory checked, and maybe helped to improve and make more rigorous, the part that justifies the introduction of the probability space.$\DeclareMathOperator{\E}{\mathbb{E}}$$\DeclareMathOperator{\Var}{Var}$$\DeclareMathOperator{\Cov}{Cov}$
First, because of the periodicity of sine, $\sin\left(2^n\right) = \sin\left(2\pi\left\{\frac{2^n}{2\pi}\right\}\right) =\sin\left(2\pi \left\{2^{n-1}\frac{1}{\pi}\right\}\right),$ where $\{\cdot\}$ denotes the fractional part of a number.
I define a number $c$ to be normal in base $2$ if $$\left (\forall(a,b)\in\left\{(a,b):a\in(0,1)\wedge\ b\in (a,1)\right\}:\lim_{N\to\infty}\left(\frac 1 N\sum_{n=1}^{N}I\left[\left\{2^n c\right\}\in(a,b)\right] \right)=b-a\right)\wedge\\\left(\forall a\in[0,1]: \lim_{N\to\infty}\left(\frac 1 N \sum_{n=1}^{N}I\left[\left\{2^nc\right\}=a\right]\right)=0\right).$$ This definition can be found, e.g., here, page 127.
Below I assume that $\frac 1 \pi$ is a normal number in base $2.$
Let's introduce a probability space $(\Sigma, \mathcal{F}, P)$ satisfying Kolmogorov's axioms, taking $\Sigma = (0,1),$ $\mathcal{F}$ the Borel algebra on $(0,1)$, and a probability measure $P$ such that the measure of an open interval is equal to its length and the measure of a point is equal to zero. This is the probability space of a random variable uniformly distributed on $(0,1).$
There are two remarks here. First, the probability I am talking about in this answer is frequentist probability, which draws consequences from infinite (but fixed) sequences of points with a known distribution they are sampled from; it is not Bayesian probability, which characterizes degrees of belief. Second, there is a theory of measure spaces that generalizes the concept of a probability space with less restrictive axioms. I do not use it because the more restrictive probability axioms are enough for this problem, and at the moment I am more familiar with probability theory than with general measure theory.
Let's define a sequence of real numbers $\xi_n$ such that$$\left (\forall(a,b)\in\left\{(a,b):a\in(0,1)\wedge\ b\in (a,1)\right\}:\lim_{N\to\infty}\left(\frac 1 N\sum_{n=1}^{N}I\left[\xi_n \in(a,b)\right] \right)=b-a\right)\wedge\\\left(\forall a\in[0,1]: \lim_{N\to\infty}\left(\frac 1 N \sum_{n=1}^{N}I\left[\xi_n=a\right]\right)=0\right)\wedge\\\left(\forall n > 0: \left\{2\xi_{n} - \xi_{n+1}\right\} = 0\right).$$
This sequence can be drawn from our probability space; thus, in frequentist language, it is a uniformly distributed sequence drawn from our probability space. Simultaneously, the sequence $\{2^n c\}$ satisfies all the conditions for $\xi_n,$ so it is possible to work with $\{2^n c\}$ as with a particular fixed sequence drawn from our probability space.
I call a series $\sum_{n=1}^\infty x_n$ convergent in measure to a value $S$ if $$\forall \varepsilon > 0: \lim\limits_{N \to \infty} \left(\frac 1 N \sum_{M=1}^{N} I \left[\left|S - \sum_{n=1}^{M} x_n \right| > \varepsilon \right] \right) = 0.$$
From this definition of convergence in measure it follows that the series converges iff$$\forall \varepsilon > 0 \ \exists N > 0: \lim\limits_{M\to\infty}\left(\frac 1 M \sum_{K=N}^{N+M} I\left[\left|\sum_{n=N}^{K} x_n\right| > \varepsilon \right] \right) = 0.$$
This expression is what I am aiming to prove for $x_n=\frac{\sin(2\pi \xi_n)}{n}.$
Because $\xi_n$, as argued above, can be treated as a sample from the probability space defined above, the sequence $\frac 1 M \sum_{K=N}^{N+M} I\left[\left|\sum_{n=N}^{K} x_n\right| > \varepsilon \right]$ becomes a sample from its corresponding probability space too, and the properties of the space it is sampled from can be inferred.
Thus the convergence-in-measure criterion defined above can be rephrased, in terms of the probability space corresponding to the sequence, as $$\forall \varepsilon > 0 \ \exists N > 0: \lim\limits_{M\to\infty} P\left(\left|\sum_{n=N}^{M} x_n\right| > \varepsilon \right) = 0.$$In probability theory this type of convergence is called convergence in probability, and it is a weaker type of convergence than convergence with probability $1.$
Let's define $\Delta_{N,M} = \sum_{n=N}^{M} x_n.$ Then $\E\left[\Delta_{N,M}\right] = 0$, because the sequence $2\pi \xi_n$ is uniform on $(0, 2\pi)$ and the sine averages to zero over a period.
From Chebyshev's inequality, $\forall \varepsilon > 0:\ P\left(\left|\Delta_{N,M}\right| > \varepsilon\right) < \frac{\E\left[\Delta_{N,M}^2\right]}{\varepsilon^2}.$ Thus, to show convergence in probability it is enough to show that $\lim\limits_{N\to\infty}\lim\limits_{M\to\infty} \E\left[\Delta_{N,M}^2\right] = 0.$
Let's show that $\lim\limits_{N\to\infty}\lim\limits_{M\to\infty} \E\left[\Delta_{N,M}^2\right] = 0$ using the idea from this question.
The second moment of $\Delta_{N,M}$ can be expressed and bounded as
$$\E\left[\Delta_{N,M}^2\right] =\E\left[\left(\sum\limits_{n=N}^M x_n\right)^2\right] =\sum\limits_{n=N}^M \sum\limits_{k=N}^M \E\left[ x_n x_k \right] =2\sum\limits_{n=N}^M \sum\limits_{k=n+1}^{M} \E\left[ x_n x_k \right] +\sum\limits_{n=N}^M \E\left[ x_n^2 \right] \leq 2\sum\limits_{n=N}^M \sum\limits_{k=0}^{M-n} \left|\E\left[ x_n x_{n+k} \right]\right|.$$
From this,$$\left|\E\left[ x_n x_{n+k} \right]\right| = \left|\E\left[ \frac{\sin\left(2\pi \xi_n\right) \sin\left(2\pi \xi_{n+k}\right)}{n(n + k)}\right]\right|,$$and as is shown in Appendix 1,$$\left|\E\left[ \frac{\sin\left(2\pi \xi_n\right) \sin\left(2\pi \xi_{n+k}\right)}{n(n + k)}\right]\right| \leq \frac {C\,2^{-k}}{n(n+k)}$$for a constant $C$ independent of $n$ and $k.$
So $$\E\left[\Delta_{N,M}^2\right] \leq 2C \sum\limits_{n=N}^M \sum\limits_{k=0}^{M - n} \frac{2^{-k}}{n(n+k)}.$$
As is shown in Appendix 2, $$\lim_{N \rightarrow \infty} \lim_{M \rightarrow \infty} \sum\limits_{n=N}^M \sum\limits_{k=0}^{M - n} \frac{2^{-k}}{n(n+k)}=0,$$
and from this it follows that $\Delta_{N,M}$ converges in probability to zero as $M \rightarrow \infty$ and $N \rightarrow \infty.$
So the series converges in probability, that is, in the measure introduced by the (assuming normality of $\frac 1 \pi$) uniformly distributed sequence of numbers $\xi_n = \left\{2^n \frac 1 \pi\right\}.$
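As a quick numerical sanity check (my own addition; it proves nothing and is not part of the argument), the partial sums can be examined with mpmath. Double precision is useless here, because evaluating $\sin(2^n)$ requires knowing $2^n$ modulo $2\pi$ to many digits:

    from mpmath import mp, mpf, sin

    mp.dps = 450                   # 2**1000 has ~302 digits; keep plenty spare
    s = mpf(0)
    for n in range(1, 1001):
        s += sin(mpf(2) ** n) / n
        if n % 250 == 0:
            print(n, s)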
Appendix 1
Let $\xi_n = \left\{2^n a\right\},$ where $a$ is a normal number, be written as a binary fraction $\xi_n = 0.b_{n,1}b_{n,2}b_{n,3}\ldots = \sum\limits_{m=1}^\infty b_{n,m}2^{-m},$ where each digit $b_{n,m}$ is either $0$ or $1.$ Then $\xi_{n+k} = \left\{2^k \xi_n\right\} = \sum\limits_{m=1}^\infty b_{n,m}2^{-m+k}I\left[m > k\right] =\sum\limits_{m=1}^\infty b_{n,m+k}2^{-m}$ and $\xi_n = \sum\limits_{m=1}^{k}b_{n,m} 2^{-m} + 2^{-k}\xi_{n+k}.$
Using the same probability measure as in the main part of the answer, which treats the $\xi_n$ as uniformly distributed random variables on $(0,1),$ it is possible to treat $b_{n,k} = \lfloor 2^k \xi_n \rfloor \bmod 2$ as random variables too. For each $n$, $b_{n,1}$ and $b_{n,2}$ must be independent, i.e. all possible combinations of values of $b_{n,1}$ and $b_{n,2}$ must be equiprobable; otherwise the probabilities of $\xi_n$ lying in the subsets $\left(0,\frac 1 4\right],$ $\left(\frac 1 4, \frac 1 2\right],$ $\left(\frac 1 2 , \frac 3 4 \right],$ and $\left(\frac 3 4, 1 \right )$ would not be equal, which contradicts the assumption that $\xi_n$ is uniformly distributed.
The independence of $B_{n,k} = \sum\limits_{m=1}^{k}b_{n,m} 2^{-m}$ and $b_{n,k+1}$ for $k \geq 1$ can be shown by induction on $k$, using the same argument about the uniform distribution of $\xi_n$ from the previous paragraph. From this it follows that $B_{n,k}$ and $\sum\limits_{m=k+1}^{\infty}b_{n,m} 2^{-m}$ are independent, which is equivalent to the independence of $B_{n,k}$ and $\xi_{n+k}.$
Using the obtained results, let's estimate the absolute value of the covariance of $\sin \zeta_n$ and $\sin \zeta_{n+k},$ where $\zeta_n = 2\pi \xi_n:$
$$\E\left[\sin \zeta_n \sin \zeta_{n+k}\right] = \E\left[\sin\left(2\pi B_{n,k} + \zeta_{n+k}2^{-k}\right) \sin \zeta_{n+k}\right].$$
Because $\sin\left(\alpha+\beta\right) = \sin\alpha\cos\beta + \cos\alpha\sin\beta,$ $$\sin\left(2\pi B_{n,k} + \zeta_{n+k}2^{-k}\right) = \sin\left(2\pi B_{n,k}\right) \cos\left(\zeta_{n+k}2^{-k}\right) + \cos\left(2\pi B_{n,k}\right) \sin\left(\zeta_{n+k}2^{-k}\right) =\sin\left(2\pi B_{n,k}\right) + 2^{-k} \zeta_{n+k} \cos\left(2\pi B_{n,k}\right) + o(2^{-k}),$$and $$\E\left[\sin \zeta_n \sin \zeta_{n+k}\right] = \E\left[\sin\left(2\pi B_{n,k}\right) \sin \zeta_{n+k}\right] +\E\left[2^{-k} \zeta_{n+k} \cos\left(2\pi B_{n,k}\right) \sin \zeta_{n+k}\right] + o(2^{-k}).$$
From independence of $B_{n,k}$ and $\xi_{n+k}$ it follows that $\E\left[\sin\left(2\pi B_{n,k}\right) \sin \zeta_{n+k}\right] = 0.$
The absolute value of $\E\left[\cos\left(2\pi B_{n,k}\right)\right] = \frac{1}{2^{k}}\sum\limits_{j=0}^{2^{k}-1}\cos\left(\frac{2\pi j}{2^{k}}\right)$ is bounded by $1,$ and $\E\left[ \zeta_{n+k}\sin \zeta_{n+k} \right] = -1,$ so the absolute value of $\E\left[\sin\zeta_{n} \sin\zeta_{n+k}\right]$ is bounded by $\frac C {2^k},$ where $C$ is some constant independent of $n.$
Appendix 2
Let's prove that for the double limit
$$\lim_{N \rightarrow \infty} \lim_{M \rightarrow \infty} \sum\limits_{n=N}^M \sum\limits_{k=0}^{M - n} \frac{2^{-k}}{n(n+k)}$$
the inner limit exists and the outer limit exists and is equal to zero.
The sum $\sum\limits_{k=0}^{M - n} \frac{2^{-k}}{n(n+k)}$ is bounded from above by $I_n = \frac 1 n \sum\limits_{k=0}^{\infty} \frac{2^{-k}}{n+k} = \frac 1 n \Phi\left(\frac 1 2, 1, n\right)$ for every $n,$ where $\Phi\left(z, s, a\right)$ is the Lerch transcendent. Using property 25.14.5 from this list, it is possible to rewrite $I_n$ as $\frac 2 n \int\limits_0^\infty \frac{e^{-nx}}{2-e^{-x}}dx.$ The integrand is bounded from above by $e^{-nx}$, so $I_n$ is bounded from above by $\frac 2 n \int\limits_0^\infty e^{-nx}\, dx = \frac 2 {n^2}.$
So
$$0 \leq \sum\limits_{n=N}^M \sum\limits_{k=0}^{M - n} \frac{2^{-k}}{n(n+k)} \leq 2 \sum\limits_{n=N}^M \frac {1}{n^2}.$$
The series $\sum\limits_{n=1}^\infty \frac{1}{n^2}$ converges, as can be shown using the Maclaurin–Cauchy integral test, so as a consequence of the squeeze theorem the inner limit exists, and the outer limit exists and is equal to zero.
In a textbook I'm reading, the author states without proof that $$ \zeta(s,mz) = \frac{1}{m^{s}} \sum_{k=0}^{m-1} \zeta \left(s,z+\frac{k}{m} \right), \tag{1}$$ where $\zeta(s,z) $ is the Hurwitz zeta function
Supposedly, this isn't hard to prove. But is it possible to prove $(1)$ using simply the series definition of the Hurwitz zeta function, that is, $ \displaystyle\zeta(s,z) = \sum_{n=0}^{\infty} \frac{1}{(z+n)^{s}}$?
It might be interesting to note that the polygamma functions (excluding the digamma function) can be expressed in terms of the Hurwitz zeta function.
So from $(1)$ we can derive the multiplication formula $$\psi_{n}(mz) = \frac{1}{m^{n+1}} \sum_{k=0}^{m-1}\psi_{n} \left(z+ \frac{k}{m} \right) , \quad n \in \mathbb{Z}_{>0}.$$
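For what it's worth, identity $(1)$ is easy to check numerically with mpmath, whose two-argument zeta(s, a) is the Hurwitz zeta function (the sample values below are arbitrary):

    from mpmath import mp, mpf, zeta

    mp.dps = 30
    s, z, m = mpf(2.5), mpf(0.7), 4
    lhs = zeta(s, m * z)
    rhs = sum(zeta(s, z + mpf(k) / m) for k in range(m)) / m ** s
    print(lhs)
    print(rhs)   # agrees with lhs to working precision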
In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers.
It's more complex than just drawing some graph and fitting a line to divide the data. I should tell you that you have gone entirely wrong about the core concept.
Our brains are capable of solving complex problems, a few of which are impossible for a computer to figure out. Thus scientists studied the brain and tried to mimic its biological structure to achieve its capability in computers. One of the initial steps towards this was the perceptron.
Two types of perceptron are:
Single-layer perceptron - works only on linearly separable data.
Multi-layer perceptron - works with non-linear data too.
A perceptron has the capability of learning from given (training) data and applying that knowledge to other (new) data as we desire.
Your problem: (desired output)
Classify what type of flower a particular flower is.
There are thousands and thousands of types of flowers, so for simplicity I'm modifying your question to "classify whether this flower is jasmine or not".
Inputs you feed to the perceptron are: (features)
petal size, petal coloring, and leaf size
If you construct a perceptron (network) it will be:
$x_1, x_2, x_3$ are input neuron, $w_1, w_2, w_3$ are weights (mimicking the strength between biological neurons) and $Y$ is the output neuron.
You may have a question: "Why 3 nodes (or neurons) in the input layer and 1 node in the output layer?" So I'm explaining that below:
3 neurons in the input layer, as we have 3 features (or inputs).
1 neuron in the output layer, as we have 1 class (jasmine or not).
The input layer has 3 nodes which carry your features to the output neuron, which classifies whether it's jasmine or not.
Unlike other classification methods, here we must teach the system (like a human brain learns when it first sees a jasmine flower). How will a computer come to know whether it's a jasmine flower? The answer is: by showing it sample features (petal size, petal coloring, and leaf size) of many jasmines.
Note: In a biological neural network, a neuron gets activated when the electric impulses exceed a threshold. Mimicking this, we use a threshold function to activate (output one) or not activate (output zero) an artificial neuron [threshold value = $\theta$].
If the input is a jasmine flower, output neuron will output '1' else '0'.
$ Y = \begin{cases} 1, &\quad\text{if } \sum_i w_ix_i>\theta\\ 0, &\quad\text{otherwise}\\ \end{cases}$
Note: When we consider $w_i$ and $x_i$ as vectors ($W$ and $X$ respectively), this can be written as $Y=W^TX$ (you mentioned it as the objective function).
We give sample features (petal size, petal coloring, and leaf size) of a jasmine to the input neurons $x_1,x_2,x_3$ respectively and tell it that this is a jasmine (set the output to one). Meanwhile, the weights $w_1, w_2, w_3$ take some random (initial) values and the output $Y$ is computed. As we have given jasmine data to the input neurons, we expect the output to be one; but as the weights were taken at random, the output won't be one and an error $E$ is formed.
$E =|ExpectedOutput - ActualOutput|$
This error is propagated back, so that the weights $w_1, w_2, w_3$ can be updated according to the error, such that next time the error will be much smaller or zero.
This process is called training, the data used to train the perceptron is called training data, and the network after training is called a trained network. The training uses the back-propagation algorithm.
It learns from the training data we provide, and this trained network can then easily classify a new flower (it can tell whether it's jasmine or not).
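To make this concrete, here is a minimal sketch of a single-layer perceptron in NumPy; the feature values, learning rate, and threshold below are made up for illustration:

    import numpy as np

    X = np.array([[0.2, 0.9, 0.3],    # hypothetical jasmine samples...
                  [0.25, 0.85, 0.35],
                  [0.8, 0.1, 0.9],    # ...and hypothetical non-jasmine samples
                  [0.75, 0.2, 0.95]])
    y = np.array([1, 1, 0, 0])        # 1 = jasmine, 0 = not jasmine

    w = np.random.rand(3)             # random initial weights w1, w2, w3
    theta = 0.5                       # threshold
    lr = 0.1                          # learning rate

    for epoch in range(20):
        for xi, target in zip(X, y):
            output = 1 if np.dot(w, xi) > theta else 0   # threshold activation
            error = target - output                      # expected - actual
            w += lr * error * xi                         # weight update rule

    print(w)   # trained weights; a new flower is classified with the same rule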
Read these: Biological neural networks, NPTEL Artificial Intelligence, MIT Introduction: The Perceptron
Ancient Chinese mathematics ranged from 1400 BC (Shang dynasty) to the 6th century AD. In this period, a positional number system with base $10$ was invented, which was as revolutionary with respect to calculations as the sexagesimal system used by the Sumerians and the Babylonians. The decimal system is still used worldwide today. The Chinese also invented the abacus, which was used to simplify everyday calculations. With respect to the basic arithmetic operations, the abacus was as helpful and powerful as today's pocket calculators and is still being used in some parts of China.
The most important purely mathematical book of ancient Chinese mathematics is the Chiu Chang Suan Shu (the Nine Chapters on the Mathematical Art), which can be dated back to the Han dynasty, around 220 AD. It contains "problems" from practical arithmetic which are solved using algebraic equations. Square roots, for instance $$751\frac 12=\sqrt{564752\frac 14},$$ are calculated, and systems of linear equations are sometimes written as a coefficient matrix; e.g.
$$\begin{array}{rcrcrcl} 3x&+&2y&+&z&=&39\\ 2x&+&3y&+&z&=&34\\ x&+&2y&+&3z&=&26 \end{array}$$ is written as $$\pmatrix{3& 2&1\\ 2& 3&1\\ 1&2&3}$$ and solved using calculations applied to these matrices. The matrices sometimes contain negative numbers, which here appear for the first time anywhere in the world.
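As a quick check with modern tools (my addition), the system above can be solved numerically; the classical answer is $x=9\frac14$, $y=4\frac14$, $z=2\frac34$:

    import numpy as np

    A = np.array([[3, 2, 1],
                  [2, 3, 1],
                  [1, 2, 3]], dtype=float)
    b = np.array([39, 34, 26], dtype=float)
    print(np.linalg.solve(A, b))   # [9.25 4.25 2.75]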
Despite these revolutionary inventions, Chinese mathematics did not develop much further over centuries. The reason for this was the special education system, in which students had to memorize classic results. They passed the exams only if they were able to recite the old texts from memory.
References: [641], [8244]
[641] Gowers, Timothy: "The Princeton Companion to Mathematics", Princeton University Press, 2008.
[8244] Struik, D.J.: "Abriss der Geschichte der Mathematik", Studienbücherei, 1976.
If, for each node of a tree, the longest path from it to a leaf node is no more than twice as long as the shortest one, the tree has a red-black coloring.
Here's an algorithm to figure out the color of any node n:

    if n is root,
        n.color = black
        n.black-quota = height(n) / 2, rounded up
    else if n.parent is red,
        n.color = black
        n.black-quota = n.parent.black-quota
    else (n.parent is black):
        if n.min-height < n.parent.black-quota, then
            error "shortest path was too short"
        else if n.min-height = n.parent.black-quota, then
            n.color = black
        else (n.min-height > n.parent.black-quota),
            n.color = red
        either way,
            n.black-quota = n.parent.black-quota - 1
Here n.black-quota is the number of black nodes you expect to see going to a leaf from node n, and n.min-height is the distance from n to the nearest leaf. For brevity of notation, let $b(n) = $ n.black-quota, $h(n) = $ n.height, and $m(n) = $ n.min-height.
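For concreteness, here is a direct Python transliteration of the pseudocode above; the tree representation and the recursive driver are my own additions for illustration:

    import math
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        left: Optional["Node"] = None
        right: Optional["Node"] = None
        color: str = ""
        black_quota: int = 0

    def height(n: Node) -> int:       # longest path down to a leaf, counting n
        kids = [c for c in (n.left, n.right) if c]
        return 1 + max(map(height, kids)) if kids else 1

    def min_height(n: Node) -> int:   # shortest path down to a leaf, counting n
        kids = [c for c in (n.left, n.right) if c]
        return 1 + min(map(min_height, kids)) if kids else 1

    def color(n: Node, parent: Optional[Node] = None) -> None:
        if parent is None:                        # n is the root
            n.color = "black"
            n.black_quota = math.ceil(height(n) / 2)
        elif parent.color == "red":
            n.color = "black"
            n.black_quota = parent.black_quota
        else:                                     # parent is black
            if min_height(n) < parent.black_quota:
                raise ValueError("shortest path was too short")
            n.color = "black" if min_height(n) == parent.black_quota else "red"
            n.black_quota = parent.black_quota - 1
        for child in (n.left, n.right):
            if child:
                color(child, n)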
Theorem: Fix a binary tree $T$. If for every node $n \in T$, $h(n) \leq 2m(n)$ and for node $r = \text{root}(T)$, $b(r) \in [\frac{1}{2}h(r), m(r)]$ then $T$ has a red-black coloring with exactly $b(r)$ black nodes on every path from root to leaf.
Proof: Induction over $b(n)$.
Verify that all four trees of height one or two satisfy the theorem with $b(n) = 1$.
By definition of red-black tree, root is black. Let $n$ be a node with a black parent $p$ such that $b(p) \in [\frac{1}{2}h(p), m(p)]$. Then $b(n) = b(p) -1$, $h(n) = h(p)-1$ and $h(n) \geq m(n) \geq m(p)-1$.
Assume the theorem holds for all trees with root $r$ such that $b(r) < b(p)$.
If $b(n) = m(n)$, then $n$ can be red-black colored by the inductive assumption.
If $b(p) = \frac{1}{2}h(p)$ then $b(n) = \lceil \frac{1}{2}h(n) \rceil - 1$. $n$ does not satisfy the inductive assumption and thus must be red. Let $c$ be a child of $n$. $h(c) = h(p)-2$ and $b(c) = b(p)-1 = \frac{1}{2}h(p)-1 = \frac{1}{2}h(c)$. Then $c$ can be red-black colored by the inductive assumption.
Note that, by the same reasoning, if $b(n) \in (\frac{1}{2}h(n), m(n))$, then both $n$ and a child of $n$ satisfy the inductive assumption. Therefore $n$ could have either color.
Let $\sigma(n)$ be the sum-of-divisors function, with the divisors raised to the power $1$. If the Riemann Hypothesis is false, Robin proved there are infinitely many counterexamples to the inequality $$\sigma(n)<e^\gamma n \log \log n.$$ There are 27 small counterexamples, but the conjecture is that it holds for every $n>5040$. Akbary and Friggstad showed the least counterexample to it must be a superabundant number, i.e. a number $a$ such that $\frac{\sigma(a)}{a}>\frac{\sigma(b)}{b}$ for all $b<a$. Now, it is a virtual certainty that if the inequality fails (for some $n>5040$), the maximum of the ratio $\frac{\sigma(n)}{n \log \log n}$ will be reached by a colossally abundant number, namely a number $c$ such that $\frac{\sigma(c)}{c^{1+\epsilon}}>\frac{\sigma(d)}{d^{1+\epsilon}}$ for all $d<c$ and for some $\epsilon>0$. Since it could lead me to something on the subject, what I'm asking is: if the inequality fails, will only a finite number of colossally abundant numbers satisfy Robin's inequality?
For the benefit of those who may not be familiar with all this:
In 1915 Ramanujan proved that if the Riemann Hypothesis is true, then for all sufficiently large $n$ we have an inequality on $\frac{\sigma(n)}{n}$, where $\sigma(n)$ is the sum of the divisors of the positive integer $n$. The inequality was $$\sigma(n)<e^\gamma n \ln\ln n$$ In 1984 Robin elaborated this to show that if there is a single exception to this for $n>5040$ (the largest currently known exception and a "colossally abundant number" - hereafter a "CA"), then the Riemann Hypothesis is false.
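Purely as an illustration (it proves nothing, of course), the inequality is easy to check numerically just above 5040 with sympy's divisor_sigma:

    from math import exp, log
    from sympy import divisor_sigma, S

    e_gamma = exp(float(S.EulerGamma))             # e^gamma = 1.78107...
    for n in range(5041, 20000):
        if int(divisor_sigma(n)) >= e_gamma * n * log(log(n)):
            print("counterexample?", n)
            break
    else:
        print("no counterexample below 20000")     # as expected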
Because of the importance of the RH, this attracted a good deal of attention. But 30 years later nothing seems to have come of it.
Obviously the most plausible candidates to break the inequality are numbers with lots of divisors. I believe, though I am weak on the history, that the concept, if not the name, of CA came from Ramanujan during his 1915 work.
To give a little perspective: few people are interested in CA per se. But vast numbers of people are interested in RH, even if only a tiny number do serious work on it (because of the risk to one's reputation). So the immediate interest of the inequality was that it provided another way, superficially at least totally different, to disprove the RH by computation. People had got fed up with results that the first zillion zeros were on the line, particularly when analysts quoted Littlewood's "Miscellany" on the Skewes' number (which is now a somewhat less compelling point :) ). So this was something else to try.
However, after 30 years nothing has so far come of that. In the meantime people have been working on CA as objects of interest in their own right.
The question is whether if the RH is false (so that the inequality fails - Robin's result was an iff type result), then only a finite number of CA will satisfy the Robin inequality.
[Added later - the precise question having been clarified]
If I had realised that would be the question, I would never have started to answer it! I had earlier understood it to be a quite different question. But there are a few points to be made.
I have never read Robin's paper - my interest is in RH, and I do not regard Robin's inequality as a useful way of tackling the RH (a judgment which of course is of zero interest to anyone else). So I am at a serious disadvantage - in not having read the paper and to compound that, I cannot immediately lay my hands on it.
It is fairly easy to show that if $a,b$ are coprime counterexamples to the inequality, then so is $ab$ provided $a,b$ are sufficiently big (which they would be). It is also fairly clear that unless something weird happens at huge values, counterexamples are likely to be CA. So it seems a fairly safe guess that if RH is false then there will be infinitely many CA not satisfying the Robin inequality.
But unfortunately the question asks for something much stronger than that, namely: will all but finitely many CA fail to satisfy it?
Short answer: good question; I have no idea and should delete this entire answer. But pending a little digging early in the coming week I will leave it here until a better answer comes.
In my defence, I would only say that I have been using this site for less than 3 weeks. I have answered lots of daft questions, and had fun competing to put up answers fast. I failed to adjust adequately when this one came along. But it does illustrate the wisdom of clarifying the question with comments before writing answers. I had started to do that, but got impatient when I could not immediately grasp the clarifications. That was entirely my fault. I apologise unreservedly.
As a result of the conservation of charge, the current is the same everywhere in the circuit (continuity equation).
We know from Ohm's law that the current density is proportional to the applied electric field, $$\mathbf j=\sigma \mathbf E,$$ where $\sigma$ is called the conductivity. Since we apply a constant voltage, the electric field is the same across the circuit. (Think of it this way: if the electron is nearer to the positive terminal, it feels a greater attraction to it but also a lesser repulsion from the negative one.) But if the conductivity changes, the electric field need not be the same, as we will see later. Due to scattering off atoms, the electrons are slowed down, making their average speed constant (this makes up the resistance). This explains the proportionality in Ohm's law. From this we get that the drift velocity is proportional to the applied electric field.
We can write the current through the circuit as $$I=nev_dA,$$ where $n$ is the charge carrier density per volume, $e$ the elementary charge, $v_d$ the drift velocity and $A$ the cross-sectional area. As the current is the same throughout the circuit, the drift velocity can indeed differ with the cross-sectional area $A$ and the charge carrier density $n$. For example, if you halve the cross-sectional area, the drift velocity doubles. You can think analogously of a water circuit: the analogue of a resistor (e.g. the bulb) is a narrowing. If the area is halved in the narrowing, the water flows twice as fast there as before. But after the narrowing it flows with the same velocity as before it.
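To put numbers on $I=nev_dA$, here is a back-of-the-envelope drift velocity for a typical copper wire (the carrier density and wire size are assumed textbook values, not taken from the question):

    n = 8.5e28          # charge-carrier density of copper, 1/m^3
    e = 1.602e-19       # elementary charge, C
    A = 1e-6            # cross-sectional area, m^2 (1 mm^2)
    I = 1.0             # current, A

    v_d = I / (n * e * A)
    print(v_d)                     # ~7e-5 m/s: the drift is tiny
    print(I / (n * e * (A / 2)))   # halving the area doubles the drift velocity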
As mentioned above, the electric field need not stay the same across the circuit. If we revisit Ohm's law (using absolute values instead of vectors) and use the definition $j=\frac I A$, we get $$\frac I A =\sigma E.$$ For simplicity let's assume the area $A$ is the same for the wires and the bulb. Since the bulb is a resistor, it will have a smaller conductivity $\sigma$. (Note that the conductivity is a material constant and does not depend on the area or the length.) But this means the electric field inside the bulb has to be greater, because the current is the same across the circuit. This complies with Kirchhoff's voltage law, which states that the voltages across a loop sum up to zero. Usually we treat the wires as perfect conductors with infinite conductivity and assume that the resistance can be modeled as one discrete element. Hence Ohm's law gives us that all the battery's voltage drops at the bulb. But Ohm's law gives us more: if there are two or more resistors in a circuit, we can calculate the voltage drops based on their individual conductivities (the smaller resistor will have the smaller electric field).
You are right: the electrons lose kinetic energy and convert it to heat in the resistor, but at the same time the bigger electric field accelerates them again and compensates their energy loss at the expense of potential energy. From an energy point of view, you have a constant voltage source making sure there is always the same potential difference. This means that if energy is lost as heat at the bulb, the voltage source will compensate for that.
There is another way to look at how energy gets transferred from the battery to the resistor, which comes in handy when considering high frequencies. (I guess it is also more correct.) We can look at the electric and magnetic fields created by the current. The Poynting vector $\mathbf S$ measures the energy flux and is defined as $$\mathbf S = \mathbf E \times \mathbf H.$$ If you do this for your circuit, you will see that the Poynting vector points from the battery to the lamp, as in the picture below from Wikipedia. That is to say, the energy flows from the battery to the bulb. Look also at this great post by @wbeaty.
Source: https://en.m.wikipedia.org/wiki/File:Poynting_vectors_of_DC_circuit.svg
Does it make sense to say that the quantum field of a photon is exactly proportional to the photon's electromagnetic field?
\begin{align} \bar{\Psi} = \dfrac{\bar{E}+i\bar{B}}{\sqrt{\int (E^2+B^2)dV}} \end{align}
The quantized electromagnetic field gives rise to (all) photons. There is only one (quantum) field for all photons. So asking about the quantum field of "a photon" or a "photon's electromagnetic field" doesn't seem to make sense: the photon is a quantum of the field.
Quote from Willis Lamb, Nobel Prize in Physics, 1955.
there is no such thing as a photon.
I guess what you meant was a Fock state: an eigenstate of the free electromagnetic field, here the first excited one, associated with a well-defined energy (a state which then does not evolve in time), which amounts to a light quantum, as said above by DrEntropy.
As an experimentalist I trust analysis by theoretical physicists and this is what I will use and quote selectively in my answer here:
When talking about photons, we are talking about quantum mechanics and elementary particles with their dual nature, sometimes manifesting as classical particles, sometimes as probability waves, as displayed by the two-slit experiment in this answer. The dots are the particle identification of the photon; the built-up interference pattern is the probability wave in space for the manifestation of a photon.
the probabilities predicted from quantum mechanics should be interpreted exactly in the same way as probabilities predicted from classical statistical physics. The only difference is that quantum mechanics implies that the "exact truth" about the system doesn't exist even in principle. However, in practice, you don't care about it because you can't know the coordinate and positions of many gas molecules in a bottle, anyway.
.........
For a wave from a light bulb where there are different frequencies and statistically :
Imagine that they do differ and you want to calculate the average value of the electric field E⃗ at some point in space, away from the light bulb. The electric field may be rewritten as some combination of creation and annihilation operators for photons in all conceivable states, with various coefficients. And by symmetry, or because the many photons contribute randomly, you get zero. So even though we are surely imagining – and we may measure – nonzero values of the electric field away from the light bulb at some moments and locations, the statistical expectation is zero and the fluctuations of the electric field are due to the randomness of the emission processes.
For a coherent source, like a laser, where a single frequency with a width is produced coherently then the classical potential and the wave function of the photons are directly related:
I don't want to scare you with the indices, but the wave function of a single photon mathematically looks like the (complexified) classical electromagnetic potential $\vec A(x,y,z)$, with some extra subtleties.
I think this answers it: the formula you propose is wrong. It is the electromagnetic potential that enters the wave function.
If you are really interested, you should read the blog entry carefully and then search for further reading.
It is often claimed that spin is a purely quantum property with no classical analogue. However (as was very recently pointed out to me), there is a classical analogue to spin whose action is given (in M. Stone, "Supersymmetry and the quantum mechanics of spin", Nucl. Phys. B 314 (1986), p. 560) by $$S = J\left[\int_\Gamma\vec{n}\cdot\vec{B}\,\mathrm{d}t + \int_\Omega\vec{n}\cdot\left(\frac{\partial\vec{n}}{\partial t}\wedge\frac{\partial\vec{n}}{\partial \tau}\right)\mathrm{d}t\,\mathrm{d}\tau\right],$$ where $\Omega$ is a region on the unit sphere bounded by the closed loop $\Gamma$. Stone goes on to derive the algebra satisfied by the quantities $J_i \equiv J\,n_i$ as the standard angular momentum algebra given by $$\{J_i,J_j\}_{PB} = \epsilon_{ijk}J_k,$$ where $\{\cdot,\cdot\}_{PB}$ denotes the Poisson bracket.
However, while this all makes a good deal of sense, and the connection to spin in quantum theories is evident through path integration over SU(2), I have to wonder: why isn't this action studied in most introductions to classical mechanics? Some authors claim it's because the dynamics generated by this action can only be understood through the use of symplectic forms, which is a rather advanced formulation of classical mechanics. But this answer isn't very satisfying to me. Instead, I have the following set of related questions:
1.) Are there even any classical systems which can be described by such an action? Alternatively, since the above action has degrees of freedom defined on the surface of the unit sphere $S^2$, are there classical systems whose dynamical degrees of freedom are simply on $S^2$?
2.) If so, what are they, and do they have direct quantum analogues (in the same way that, say, the harmonic oscillator exists in both classical and quantum mechanics, and the classical and quantum Hamiltonians are directly related to one another)?
3.) If not, why does spin appear as a dynamical variable in quantum mechanics but not in classical mechanics, despite the fact that both theories can accommodate it? If classical mechanics is simply a limiting case of quantum mechanics, what about this limiting process causes spin to disappear in classical systems?
Question:
In a double-slit experiment, the distance between the slits is {eq}0.2\ mm {/eq}, and the distance to the screen is {eq}150\ cm {/eq}. What wavelength (in nm) is needed to have the intensity at a point {eq}1\ mm {/eq} from the central maximum on the screen be {eq}80\% {/eq} of the maximum intensity?
Double Slit Experiment:
Thomas Young proposed the double-slit experiment to study the phenomenon of interference of light. It was observed that there were alternate bright and dark fringes that were formed on a screen when a monochromatic light was passed through a pair of slits whose width was equivalent to the wavelength of the light used. The intensity of the fringes kept on decreasing from the central maximum.
Answer and Explanation:
Given:
Separation of the slits: {eq}d = 0.2 mm = 0.2 \times 10^{-3} m {/eq}. Distance to the screen: D = 150 cm = 1.5 m.
In the ideal case, the intensity of each bright fringe is equal to the central maximum and they are equally spaced from each other.
But in reality, the intensity of the consecutive bright fringes varies and keeps on decreasing. If the intensity of the central maximum is given by {eq}I_{max} {/eq}, then the intensity of a bright fringe at a distance y from the central maximum is given as:
{eq}I = I_{max} cos^2 (\dfrac{\pi d}{\lambda D} y) {/eq}
Given that the intensity at a distance {eq}y = 1 mm = 10^{-3} m {/eq} is 80% of the central maximum, then:
{eq}0.8 I_{max} = I_{max} cos^2 (\dfrac{\pi d}{\lambda D} y) {/eq}
{eq}0.8 =cos^2 (\dfrac{\pi d}{\lambda D} y) {/eq}
{eq}\sqrt{0.8} =cos (\dfrac{\pi d}{\lambda D} y) {/eq}
{eq}0.894 =cos (\dfrac{\pi d}{\lambda D} y) {/eq}
{eq}cos^{-1}(0.894 )= (\dfrac{\pi d}{\lambda D} y) {/eq}
{eq}0.465 = (\dfrac{\pi d}{\lambda D} y) {/eq}
{eq}\lambda = (\dfrac{\pi d}{(0.465) D} y) = (\dfrac{\pi (0.2 \times 10^{-3})}{(0.465) (1.5)} \times 10^{-3}) {/eq}
{eq}\lambda = 900 \times 10^{-9} m = 900 nm {/eq}
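The arithmetic can be checked with a few lines of Python (values as given in the problem):

    from math import pi, acos, sqrt

    d = 0.2e-3     # slit separation, m
    D = 1.5        # slit-to-screen distance, m
    y = 1e-3       # distance from the central maximum, m

    phi = acos(sqrt(0.8))           # ~0.4636 rad
    lam = pi * d * y / (phi * D)
    print(lam * 1e9)                # ~903 nm, consistent with ~900 nm above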
As open problems of the month, we state here the two questions discussed in our recent guest post by Oded Goldreich:
Open Problem 1 (obtaining a one-sided error reduction): The reduction from testing affinity of a subspace to that of testing affinity of a function has two-sided error probability. We wonder whether a one-sided error reduction of similar complexity can be found.
Note that this will yield a one-sided error tester for affinity, which we believe is not known. Actually, we would also welcome a two-sided error reduction that, when combined with the linearity tester, yields a tester of complexity \(O(1/\epsilon)\) rather than \(\tilde{O}(1/\epsilon)\).
Turning to the task of testing monomials, we recall that the solution of [PRS02] is based on employing self-correction to the following test, which refers to the case that \(H=\{x:Ax=b\}\) is an \((\ell-k)\)-dimensional affine space. Basically, the test selects a random \(x\in H\) and a random \(y\in\{0,1\}^\ell\), and checks whether it holds that \(y\in H\) if and only if \(x\wedge y \in H\), where \(\wedge\) denotes the bit-by-bit product of vectors. It is painfully shown in [5] that
if \(H\) is not a translation by \(1^\ell\) of an axis-aligned linear space, then the check fails with probability \(\Omega(2^{-k})\). Furthermore, it is shown that for a constant fraction of the \(x\)’s (in \(H\)), the check fails on a \(\Omega(2^{-k})\) fraction of the \(y\)’s (in \(\{0,1\}^\ell\)). This strengthening is important, since selecting \(x\in H\) uniformly requires \(\Theta(2^k)\) trials. Recall that proving the foregoing assertion for \(k=1\) is quite easy (cf. [5]), which leads us to ask
Open Problem 2 (can a simpler proof be found for the case of \(k>1\)): Is there a relatively simple reduction of the foregoing claim for general \(k\) to the special case of \(k=1\)?
[PRS02] M. Parnas, D. Ron, and A. Samorodnitsky. Testing Basic Boolean Formulae.
SIAM Journal on Disc. Math. and Alg., Vol. 16 (1), pages 20–46, 2002.
[We are delighted to have this month a blog post by Oded Goldreich, on a reduction from testing affinity of subspaces to testing linearity of Boolean functions. Oded also gave us open problems directly related to this post.]
We consider the task of testing whether a Boolean function \(f\colon \{0,1\}^\ell\to\{0,1\}\) is the indicator function of an \((\ell-k)\)-dimensional space. An optimal tester for this property was presented by Parnas, Ron, and Samorodnitsky (
SIDMA, 2002), by mimicking the celebrated linearity tester (of Blum, Luby and Rubinfeld, JCSS, 1993) and its analysis. We show that this task can be reduced to testing the linearity of a related function \(g\colon\{0,1\}^\ell\to\{0,1\}^k\), yielding an almost optimal tester.
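For readers who have not seen it, here is a minimal toy sketch of the BLR linearity test that the reduction targets (this is the classic test itself, not the authors' reduction; bit strings are encoded as integers):

    import random

    def blr_test(f, l, trials=100):
        # Accept f: {0,1}^l -> {0,1} if f(x) ^ f(y) == f(x ^ y) on random pairs.
        for _ in range(trials):
            x = random.getrandbits(l)
            y = random.getrandbits(l)
            if f(x) ^ f(y) != f(x ^ y):
                return False   # f is certainly not linear
        return True            # f is probably close to linear

    # Example: a genuine linear function (parity of a masked subset of bits)
    mask = 0b1011
    f = lambda x: bin(x & mask).count("1") % 2
    print(blr_test(f, 4))   # True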
While March 2016 has been rather quiet in terms of property testing, we did see a new paper appear:
A Note on Tolerant Testing with One-Sided Error, by Roei Tell (ECCC). A natural generalization of property testing is that of tolerant testing, as introduced by Parnas, Ron, and Rubinfeld [PRR06]: the tester still must reject all objects that are far from satisfying the property, but now also has to accept those that are sufficiently close (all that with constant probability). This work considers the question of one-sidedness of tolerant testers: namely, is it possible to err only on the farness side, and accept close inputs with probability one? As it turns out, it is not — the author shows that any such one-sided tolerant tester, for basically any property of interest, must essentially query the whole input…
Universal Locally Testable Codes, by Oded Goldreich and Tom Gur (ECCC). In this work, the authors introduce and initiate the study of an extension of locally testable codes they name universal locally testable codes (universal-LTC). At a high-level, a universal-LTC (with regard to a family of functions \(\cal F\)) is a locally testable code \(C\) “for which the restrictions (subcodes) of \(C\) by functions in \(\cal F\) are also locally testable.” In other terms, one is then able to test efficiently, given an encoded string \(w\), if (i) \(w=C(x)\) for some \(x\); but also, for any \(f\in \cal F\), if (ii) \(w=C(x)\) for some \(x\) that satisfies \(f(x)=1\).
Edit (04/06): added the work of Goldreich and Gur, which was overlooked in our first version of the article.
The brachistochrone problem is a very famous problem in the history of physics, which was first solved by the excellent mathematician Johann Bernoulli. He posed this problem in 1696 as a challenge to the greatest mathematicians of Europe. He stated the problem as follows:
We are given two fixed points in a vertical plane. A particle starts from rest at one of the points and travels to the other under its own weight. Find the path that the particle must follow in order to reach its destination in the briefest time.
In other words, if a particle's initial position is \((x_1,y_1)\) and it moves to a final position \((x_2,y_2)\) under only the action of gravity, the problem is to find the particular path \(x(y)\) in the plane such that, if the particle moved along that path, it would reach its final position in the least time. Like many problems in physics, this one involves idealizations of the physical system under consideration\(^1\): in this problem, the object is approximated as a particle which is subjected only to the action of gravity. Any theoretical scheme used in the past to solve the brachistochrone problem necessarily had to obey fundamental laws such as the conservation of energy. The conservation of energy implies (see below) that, no matter which path an object takes, the speed \(v\) of the particle must be a function of the height \(y\) through which it has fallen, of the form \(v(y)=\sqrt{2gy}\). The change in the particle's speed depends on only its change in height.
$$\frac{1}{2}mv^2=mgy⇒v(y)=\sqrt{2gy}.$$
Bernoulli used a very sophisticated procedure to solve the brachistochrone problem. He knew from Fermat's principle that it is a law of nature that light always moves along the path of least time, and that light passes through materials of varying indices of refraction in accordance with Snell's law. He imagined stacking infinitely many very thin sections of materials on top of each other, where the refractive index varies smoothly from one layer to the next. The velocity of a light beam passing through such a stack of materials, from Snell's law, would also vary smoothly as a function of height and can be written as \(\vec{c}(y)\). This light beam would trace out a special path, which is the path of least time for any object, not just light. This very creative way of thinking allowed Bernoulli to solve the brachistochrone problem.
But in this section, we'll use a much less sophisticated method to solve the brachistochrone problem than Bernoulli, Leibniz, Newton, and others did. The goal of all the math that follows will be to express the quantity which we want to minimize (in this problem, the time \(t\)) as a functional of the form \(S(q_j(x),q_j’(x),x)=t(x(y),x'(y),y)\). After that there will be a little more math, the goal of which will be to express the integrand as a functional of the form \(F(q_j,q_j’,x)\). This will allow us to calculate all of the derivatives (in this problem, there are three of them) in the Euler-Lagrange equation; these simplifications will allow us to use the Euler-Lagrange equation (the equation of motion) to solve for the path \(x(y)\) which minimizes the functional (the time \(t\)).
Let's start out with our first goal of expressing the time as a functional. As the particle falls under the action of gravity, its velocity is constantly changing. However, as the particle falls by a very small amount and traverses an infinitesimal displacement \(dS\), its velocity is essentially constant and we can write \(dS=v\,dt\). Since we eventually want to be able to express the time as a functional, let's rearrange this equation for the time: \(dt=\frac{dS}{v}\). Already we have a rough idea of where we're trying to go with the math: we'll have to take an integral, since we want to express the time as a functional of the form in Equation (3) from the section on the derivation of the Euler-Lagrange equation. So, let's do that:
$$\int_{t_1}^{t_2}dt=t=\int{\frac{dS}{v}}.\tag{1}$$
So we're one step closer to getting an equation which looks like Equation (3) from the derivation section, but the problem is that our integrand doesn't yet look quite like the integrand in Equation (3). To get there, we'll need to express the velocity \(v\) and the displacement \(dS\) in terms of \(x(y)\), \(x'(y)\), and \(y\). For the velocity \(v\), we can just use the conservation of energy as we did earlier to get \(v(y)=\sqrt{2gy}\). If we substitute this equation into Equation (1), we'll get
$$\int_{t_1}^{t_2}dt=t=\int{\frac{dS}{\sqrt{2gy}}};\tag{2}$$
and you can see that we have now expressed the time \(t\) in terms of a \(y\) term. From the Pythagorean theorem, the arc length element is \(dS=\sqrt{1+\left(\frac{dx}{dy}\right)^2}\,dy\); this is what lets us pick up an \(x'(y)\) term. If we substitute this into Equation (2), we get
$$t(x(y),x'(y),y)=\int\sqrt{1+\biggl(\frac{dx}{dy}\biggl)^2}\,(2gy)^{-1/2}\,dy=\frac{1}{\sqrt{2g}}\int{\sqrt{\frac{1+\bigl(\frac{dx}{dy}\bigr)^2}{y}}}\,dy.\tag{3}$$
So, you can see that both the time and the integrand in Equation (3) are functionals of the form we want. Thus, we can apply the analysis we used in our derivation to solve this problem. As a reminder—and I'm sure that by now this might seem like a broken record—the Euler-Lagrange equation is the condition which satisfies our functional being minimized. At this point, we basically just want to do a bunch of algebra to derive the motion \(x(y)\) which satisfies the Euler-Lagrange equation—this will give us the path of least time. The first step to doing this will be to evaluate all of the derivatives in the Euler-Lagrange equation as shown in the video and below:
$$\frac{∂L}{∂x}=\frac{∂}{∂x}\biggl(\sqrt{1+\biggl(\frac{dx}{dy}\biggl)^2}\text{ }y^{-1/2}\biggl)=0$$
$$\frac{d}{dy}\frac{∂}{∂x'}\sqrt{\frac{1+\bigl(\frac{dx}{dy}\bigr)^2}{y}}=0.$$
Let's evaluate the partial derivative with respect to \(x'\) to simplify the above equation to
$$\frac{d}{dy}\frac{(1+(x')^2)^{-1/2}}{\sqrt{y}}x'=0.$$
If the derivative of something is zero then that something must be constant and we get
$$\frac{(1+(x')^2)^{-1/2}}{\sqrt{y}}x'=C.$$
We shall let \(C=\frac{1}{\sqrt{2a}}\), and we'll see later that the motivation for doing this is to be able to make trigonometric substitutions that simplify our equation. By substituting for \(C\) and doing some algebra, we get a series of simplifications as shown in the video and below:
$$\frac{x'^2}{y(1+x'^2)}=\frac{1}{2a}$$
$$2ax'^2=y(1+x'^2)$$
$$(2a-y)x'^2=y$$
$$x'^2=\frac{y}{2a-y}$$
$$x'=\sqrt{\frac{y}{2a-y}}.$$
Our goal this entire time has just been to isolate the solution \(x(y)\), and we see that we are starting to get very close. All we have to do is "undo the derivative" on the left-hand side, so to speak, and we can do that by taking the anti-derivative, or the integral, on both sides of the equation:
$$x(y)=\int{\sqrt{\frac{y}{2a-y}}}\,dy.$$
As intimidating as the integral above might look, solving it essentially just boils down to making a few trigonometric substitutions and a lot of algebra. If you are unfamiliar with the trigonometric substitutions used in the video above, I suggest refreshing your trigonometry skills by checking out the Khan Academy's videos on trigonometry. I'll omit the substitutions and algebra used to solve the integral since all of those steps are shown in the video; but after doing all the necessary trigonometry and algebra, what you end up with are the equations below:
$$x(θ)=a(θ-\sin θ)$$
$$y(θ)=a(1-\cos θ)+C.$$
As described in the video above, the constant \(C\) can be solved for by setting \(x\) and \(y\) equal to zero at \(θ=0\). Doing so, we find that \(C=0\). The constant \(a\) can also be solved for by substituting the appropriate conditions; but we won't worry about this and will just be concerned with what the graph of these parametric equations looks like. The equations above simplify to
$$x(θ)=a(θ-\sinθ)$$
$$y(θ)=a(1-\cosθ).$$
If we graph these equations, the curve we get is a cycloid. If a particle slides down a frictionless surface (remember that we said the only force acting on it is gravity) whose shape is that of a cycloid, it will travel from any point along the surface to any other point at a lower height in less time than along any other curve.
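As a rough numerical check (my own sketch, with arbitrary \(a=1\) and \(g=9.81\), not part of the original article), the descent time along the cycloid comes out shorter than along a straight ramp between the same endpoints, using \(t=\int dS/\sqrt{2gy}\):

```python
import numpy as np

g, a = 9.81, 1.0
theta = np.linspace(1e-6, np.pi, 200001)           # cycloid parameter, start to bottom
x = a * (theta - np.sin(theta))
y = a * (1 - np.cos(theta))                        # y measured downward from the start

def descent_time(x, y):
    # t = sum of ds / v, with the speed v from energy conservation at each segment midpoint
    ds = np.hypot(np.diff(x), np.diff(y))
    v = np.sqrt(2 * g * 0.5 * (y[:-1] + y[1:]))
    return np.sum(ds / v)

# straight ramp between the same two endpoints, for comparison
xs = np.linspace(x[0], x[-1], 200001)
ys = y[0] + (y[-1] - y[0]) * (xs - x[0]) / (x[-1] - x[0])

print("cycloid :", descent_time(x, y))             # about pi * sqrt(a/g) ~ 1.003 s
print("straight:", descent_time(xs, ys))           # noticeably longer
```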
This article is licensed under a CC BY-NC-SA 4.0 license.
References
1. The Kaizen Effect. "Lagrangian Mechanics - Lesson 3: The Brachistochrone Problem". Online video clip. YouTube. YouTube, 21 May 2016. Web. 01 June 2016.
2. http://www.storyofmathematics.com/20th.html
Notes
1. It is always necessary that some combination of laws, theories, approximations, and idealizations be assumed true in order for meaningful results to be mathematically deduced. The Euler-Lagrange equation is a mathematical deduction based upon the assumption that very fundamental laws are true. Because it rests on fundamental laws which are (or at least seem to be) universal, rather than on approximations and theoretical schemes limited to very special circumstances, the Euler-Lagrange equation is universal and applicable to nearly all situations (as long as these fundamental laws do not break down). One might think that the inapplicability of this equation to situations involving non-conservative forces limits its generality; while this is in fact the case in practice, it is not in principle, since on the most fundamental level there are no non-conservative forces in nature. For example, on the most fundamental level, friction (which is treated as non-conservative on macroscopic scales and in practice) is just the result of many atoms and molecules influencing each other via electromagnetic interactions (which are conservative forces).
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Let $\{x_n\}$ be a bounded sequence such that every convergent subsequence converges to $L$. Prove that $$\lim_{n\to\infty}x_n = L.$$
The following is my proof. Please let me know what you think.
Prove by contradiction: ($A \wedge \lnot B$)
Let {$x_n$} be bounded, and every convergent sub-sequence converges to $L$.
Assume that $$\lim_{n\to\infty}x_n\ne L$$
Then there exists an $\epsilon>0$ such that $|x_n - L|\ge \epsilon$ for infinitely many $n$.
Thus, there exists a sub-sequence $\{x_{n_{k}}\}$ such that $|x_{n_{k}} - L| \ge\epsilon$ for all $k$.
By the Bolzano-Weierstrass theorem, the bounded sequence $\{x_{n_k}\}$ has a convergent sub-sequence $\{x_{n_{k_l}}\}$; since every term of $\{x_{n_{k_l}}\}$ is at distance at least $\epsilon$ from $L$, its limit cannot be $L$.
But $\{x_{n_{k_{l}}}\}$ is also a sub-sequence of the original sequence $\{x_n\}$, so this is a contradiction, since every convergent sub-sequence of $\{x_n\}$ converges to $L$.
Hence the assumption is wrong. So $\lim_{n\to\infty}x_n = L.$
April has been kind to us, providing us with a trio of new papers in sublinear algorithms. Also, in case you missed them, be sure to check out Oded Goldreich’s guest post and the associated open problem.
Sparse Fourier Transform in Any Constant Dimension with Nearly-Optimal Sample Complexity in Sublinear Time, by Michael Kapralov (ArXiv). There is a rich line of work focused on going beyond the celebrated Fast Fourier Transform algorithm by exploiting sparsity in the signal’s Fourier representation. While the one-dimensional case is very well understood, results in higher dimensions have either been lacking with respect to time complexity, sample complexity, or the approximation guarantee. This work is the first to give an algorithm which runs in sublinear time, has near-optimal sample complexity, and provides an arbitrarily accurate approximation guarantee for the multivariate case.
Sublinear Time Estimation of Degree Distribution Moments: The Arboricity Connection, by Talya Eden, Dana Ron, C. Seshadhri (ArXiv). This work revisits the problem of approximating the moments of the degree distribution of a graph. Prior work has shown that the \(s\)-th moment of the degree distribution can be computed with \(\tilde O(n^{1 - 1/(s+1)})\) queries, which is optimal up to \(\mathrm{poly}(\log n)\) factors. This work provides a new algorithm which requires only \(\tilde O(n^{1 - 1/s})\) queries on graphs of bounded arboricity, while still matching previous near-optimal bounds in the worst case. Impressively, the algorithm is incredibly simple, and can be stated in just a few lines of pseudocode.
A Local Algorithm for Constructing Spanners in Minor-Free Graphs, by Reut Levi, Dana Ron, Ronitt Rubinfeld (ArXiv). This paper addresses the problem of locally constructing a sparse spanning graph. For an edge \(e\) in a graph \(G\), one must determine whether or not \(e\) is in a sparse spanning graph \(G'\) in a manner that is consistent with previous answers. In general graphs, the complexity of this problem is \(\Omega(\sqrt{n})\), leading to the study of restricted families of graphs. In minor-free graphs (which include, for example, planar graphs), existing algorithms require \((d/\varepsilon)^{\mathrm{poly}(h)\log(1/\varepsilon)}\) queries and induce a stretch of \(\mathrm{poly}(d, h, 1/\varepsilon)\), where \(d\) is the maximum degree, \(h\) is the size of the minor, and \(G'\) is of size at most \((1 + \varepsilon)n\). The algorithm in this paper shows the complexity of this problem to be polynomial, requiring only \(\mathrm{poly}(d, h, 1/\varepsilon)\) queries and inducing a stretch of \(\tilde O(h \log (d)/\varepsilon)\).
So there are a bunch of "pictures" (that's the technical term!) of quantum mechanics, agreeing in the broad perspective that:
There is some vector space $\{|\phi\rangle\}$ over the field $\mathbb C$ and its canonical dual space $\{\langle\phi|\},$ such that the dual operation $\mathcal D$ maps $$\mathcal D\Big(a |\alpha\rangle + b |\beta\rangle\Big) = \langle \alpha|a^* + \langle\beta|b^*$$ and there is an inverse mapping the other way and so on; we usually write this dualizing operation with a superscript $\dagger$ so that $\big(c~|\alpha\rangle\langle\beta|\big)^\dagger = c^* |\beta\rangle\langle\alpha|.$ Observable quantities are represented by Hermitian operators $\hat O^\dagger = \hat O,$ or in other words you have expressions like $$\mathcal D\Big(\hat O |\phi\rangle\Big)~|\psi\rangle = \langle\phi| \hat O|\psi\rangle,$$or what mathematicians will sometimes write $\langle \hat O\phi,~\psi\rangle = \langle \phi,~\hat O\psi\rangle.$ The point is that they are their own conjugate transpose, in the sense that they play nice with this dualizing operation. The central prediction of QM is: "you observe the eigenvalues of the Hermitian operators, but we only predict the averages of these eigenvalues over many measurements. The average always takes the form $\langle O \rangle = \langle \psi|\hat O|\psi\rangle,$ where $|\psi\rangle$ is a vector we regard as the state of the system."
In one of these pictures in particular, the Schrödinger picture, all of the operators $\hat p$ and $\hat x$ and so on are generally formally independent of time, and the state $|\psi\rangle$ changes explicitly with time according to the Schrödinger equation, $$i \hbar |\partial_t \Psi(t)\rangle = \hat H |\Psi(t)\rangle,$$ where $\hat H$ is an observable for the total energy in the system. Of course we could still define time-dependent observables like $\hat O = \hat x ~ \cos(\omega t) + \hat p/\hbar ~ \sin(\omega t)$ if we wanted, and then we would have something that we'd call maybe $d \hat O\over d t,$ but the basic point is that the theory is made out of basic things which are not fundamentally time-dependent, and you can add the time dependence if you want to. So $\hat p = -i\hbar \partial_x$ as an operator does not change over time.
One nevertheless gets that the actual observable change in the average value is given by the formula you gave, which includes the possibility of explicit time dependence. Explicit time dependence is unusual in the Schrödinger picture, but we can handle it by saying $$\frac{d\langle A\rangle}{dt} = \frac{i}{\hbar} \langle [H, A]\rangle + \langle \frac{dA}{dt} \rangle,$$ where all of the stuff inside brackets is fundamentally some operator expression first and foremost, so $\langle dA/dt\rangle$ means, "first figure out what operator $d\hat A/dt$ is, then its average appears above."
And then, you have all of the other pictures. It turns out that we can think about solving the equation $i\hbar |\partial_t \psi(t)\rangle = \hat h |\psi(t)\rangle$ for an arbitrary Hermitian $\hat h$, and we get that $|\psi(t)\rangle = \hat u(t) |\psi(0)\rangle$ for some "unitary operator" $\hat u(t)$, meaning that $\hat u \hat u^\dagger = \hat u^\dagger \hat u = 1.$ One particular one of these, $\hat U(t)$, corresponds to the case where $\hat h = \hat H.$
We can insert these into the expectation value given by the Schrödinger picture to do a sort of quantum coordinate transform, $$\langle A \rangle = \langle \psi_0|\hat U^\dagger \hat u ~ \hat u^\dagger \hat A \hat u ~ \hat u^\dagger \hat U |\psi_0\rangle.$$
The point is that now instead of $|\psi\rangle =\hat U |\psi_0\rangle$ we think about $|\psi'\rangle = \hat u^\dagger \hat U |\psi_0\rangle$ which we can derive evolves according to $$i\hbar |\partial_t \psi'\rangle = (\hat H' - \hat h') |\psi'\rangle.$$ Above you can also see primes on the operators; see now it is also more typical for operators to have explicit time dependence, since we are also replacing $\hat A$ with $\hat A' = \hat u^\dagger \hat A \hat u$ and finding $$i \hbar \frac{d\hat A'}{dt} = -\hat u^\dagger \hat h \hat A \hat u + \hat u^\dagger \hat A \hat h \hat u = [\hat A',~\hat h'].$$ In the most extreme form of this, the Heisenberg picture, we choose $\hat h = \hat H$ so that the state does not evolve at all and remains at $|\psi_0\rangle$ in perpetuity. Instead all of the operators evolve in time. This was the basis for the original "matrix mechanics" form of quantum mechanics before Schrödinger discovered his wave equation.
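As a small numerical illustration (my own sketch; the two-level Hamiltonian below is made up), the Schrödinger and Heisenberg pictures give identical expectation values:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -1.0]])      # a Hermitian "Hamiltonian" (invented)
A = np.array([[0.0, 1.0], [1.0, 0.0]])       # an observable (Pauli x)
psi0 = np.array([1.0, 0.0], dtype=complex)

t = 0.8
U = expm(-1j * H * t / hbar)                  # time-evolution operator

# Schrödinger picture: evolve the state, keep the operator fixed
psi_t = U @ psi0
schrod = np.vdot(psi_t, A @ psi_t).real

# Heisenberg picture: evolve the operator, keep the state fixed
A_t = U.conj().T @ A @ U
heis = np.vdot(psi0, A_t @ psi0).real

print(schrod, heis)                           # identical up to rounding
```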
It is also very common to have "interaction pictures" where we divide $\hat H$ into a nice easy noninteracting part $\hat H_0$ plus whatever complications exist in the interactions $\hat H_I.$ Then we choose $\hat h = \hat H_0$ which usually just throws some $e^{i \omega t}$ terms on all of the operators we're analyzing, and then we can make various approximations for the remaining dynamics now that the easy part is "out of the way."
Many optimization procedures are based upon successive approximation: they start with a value of $x$, and try to successively refine $x$ to move it closer and closer to the optimum. For instance, hill climbing and gradient ascent both have this structure.
You can use this structure to solve your problem. Let $x_t$ be the maximum for $f_t$. Then you can use $x_t$ as the starting point (the initial approximation) when looking for the maximum for $f_{t+1}$. Now iterate, using the maximum for the previous time period as the starting point for hill climbing / gradient ascent / .... in the next time period. If the maximum for $f_{t+1}$ is close to the maximum for $f_t$, this might work reasonably well.
There are many different possible ways to instantiate this basic approach. Here are some possible examples of how this might look:
With gradient ascent, you evaluate $f_{t+1}$ three times at three points near $x_t$: e.g., at $x_t$, $x_t+(\varepsilon,0)$, and $x_t+(0,\varepsilon)$. You use the result to estimate the gradient around $x_t$, and then move in the direction of the gradient.
As a possible optimization/variation, you might align the axes in the direction of the gradient for $f_t$. For instance, if the gradient was maximum in the angle $\theta$ for $f_t$, you might evaluate $f_{t+1}$ at the three points $x_t$, $x_t+(\varepsilon \cos \theta, \varepsilon \sin \theta)$, $x_t + (\varepsilon \cos (\theta+\pi/2), \varepsilon \sin (\theta+\pi/2))$, then use those values to estimate the gradient.
As another possible optimization, you could follow this with a line search. Suppose $\delta$ is the direction that maximizes the gradient. Then you could evaluate $f_{t+1}$ at $x_t,x_t+\delta,x_t+2\delta,x_t+3\delta,x_t+4\delta,\dots$, stopping at the first one that stops providing any improvement. Or, you could evaluate $f_{t+1}$ at $x_t,x_t+\delta,x_t+2\delta,x_t+4\delta,x_t+8\delta,\dots$. You could even follow this with an iteration or two of binary search to find the best stopping point, though that might not be worthwhile.
If you are trying hill-climbing, you will do something simple like this: pick a random point $x'$ in the neighborhood of $x_t$, and if $f_{t+1}(x') > f_{t+1}(x_t)$, then move to $x'$. You can repeat this a few times.
Of course, you can combine these methods. There may be many other variations you could try. I would suggest that you read about methods for optimization, brainstorm other similar methods, and then try them all to see which work best.
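Here is a minimal sketch of the warm-start idea using finite-difference gradient ascent; every name and constant below is invented for illustration:

```python
import numpy as np

def grad_estimate(f, x, eps=1e-3):
    """Estimate the gradient of f at x with two extra evaluations per coordinate."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def track_maximum(fs, x0, steps=20, lr=0.1):
    """fs: list of functions f_t; returns an approximate maximizer for each t."""
    x, path = np.asarray(x0, dtype=float), []
    for f in fs:                      # warm start: begin where the previous f peaked
        for _ in range(steps):
            x = x + lr * grad_estimate(f, x)
        path.append(x.copy())
    return path

# toy example: a peak that drifts slowly over time
fs = [lambda x, c=t: -np.sum((x - 0.05 * c) ** 2) for t in range(10)]
print(track_maximum(fs, x0=[0.0, 0.0])[-1])   # close to the final peak (0.45, 0.45)
```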
Also, you might want to try a simulation: choose $f_t$ that you believe to be representative and where you can evaluate them as many times as desired to find the true maximum and their shape, and then simulate each of these strategies to see how well they work. You might also want to visualize the shape of the $f_t$ and what these strategies are doing (what points they are visiting), in case that helps you refine them or choose suitable constants that are appropriate for your particular application domain.
I was wondering: if I were given a matrix $A$, I could calculate its Jordan canonical form. If I considered then $A^2$, I could say that if $\lambda$ is an eigenvalue of $A$, then $\lambda^2$ is an eigenvalue of $A^2$. However I couldn't say anything about the Jordan canonical form of $A^2$. Are there any connections between the two canonical forms? Given the one of $A$, could I guess the one of $A^2$, or at least have some hints except the eigenvalues? Thanks for the help.
There is indeed a formula.
To begin with, if the JCF of $A$ is $$\begin{pmatrix} J(\lambda_1,m_1)&&0\\&\ddots&\\0&&J(\lambda_t,m_t)\end{pmatrix}$$ then $A^2$ is obviously similar to $$\begin{pmatrix} (J(\lambda_1,m_1))^2&&0\\&\ddots&\\0&&(J(\lambda_t,m_t))^2\end{pmatrix}$$
The power of a Jordan block is rarely a Jordan block. But, if the eigenvalue is $\ne 0$, it is always similar to a Jordan block. Specifically, if $J(\lambda, m)$ is a Jordan block of size $m$ and eigenvalue $\lambda$, then $$(J(\lambda,m))^s\sim\begin{cases} J(\lambda^s,m)&\text{if }\lambda\ne0\\ \bigl(J(0,\lceil m/s\rceil)\bigr)^{\oplus(m\bmod s)}\oplus\bigl(J(0,\lfloor m/s\rfloor)\bigr)^{\oplus(s-(m\bmod s))}&\text{if }\lambda=0\end{cases}$$ where size-$0$ blocks are omitted; in particular, $(J(0,m))^s=\bf{0}_{m\times m}$ whenever $s\ge m$.
So, the visual procedure for $s=2$ is:
the non-nilpotent blocks remain unchanged, but the eigenvalues are squared.
each nilpotent block of size $m\ge2$ splits into two nilpotent blocks of sizes $\lceil m/2\rceil$ and $\lfloor m/2\rfloor$.
any size-$1$ nilpotent blocks produced this way are zero blocks; they simply enlarge $\ker A^2$.
Added reference: A full proof of the lemma about the JCF of $(J(\lambda,m))^s$ can be found in Oscar Cunningham's answer to this older question.
The Jordan canonical form of a matrix has the eigenvalues on its diagonal, along with some additional 1's in case some of the eigenvalues have algebraic multiplicity $>1$ (more precisely, when the geometric multiplicity is smaller than the algebraic one). Check this Wikipedia page for more info on this.
Also, as you correctly pointed out, the eigenvalues of $A^{2}$ would be squares of the eigenvalues of $A$. So the Jordan normal form of $A^{2}$ would be an "almost diagonal" matrix (similar to what we had for the Jordan normal form of $A$), with the squares of the eigenvalues in place of the original ones. Note that the algebraic multiplicities essentially carry forward from $A$ to $A^{2}$, with the multiplicities of $\lambda$ and $-\lambda$ combining into that of $\lambda^{2}$.
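A quick way to experiment with the block-splitting rule (a sketch assuming SymPy is available):

```python
import sympy as sp

# Squaring the nilpotent Jordan block J(0, 5): the rule above predicts blocks
# of sizes ceil(5/2) = 3 and floor(5/2) = 2 in the Jordan form of its square.
m = 5
J = sp.zeros(m)
for i in range(m - 1):
    J[i, i + 1] = 1                 # ones on the superdiagonal: J = J(0, 5)

P, Jsq = (J**2).jordan_form()       # Jordan form of J^2
sp.pprint(Jsq)                      # expect J(0,3) ⊕ J(0,2)
```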
Can't get integration right
05-31-2016, 12:16 PM (This post was last modified: 06-01-2016 04:23 PM by Marcio.)
Post: #1
Can't get integration right
Hello all,
Would you be kind enough to point out what I am doing wrong with this very simple integration?
\[t = \int_0^{0.515}\frac{1}{0.000273\times exp\left[16306\times\left(\frac{1}{535}-\frac{1}{535+90.45x}\right)\right](1-x)} dx\]
The correct answer is \(833\) but I keep getting \(838.867\pm2.38E-7\) with FIX 9.
This is the code I am using on the 15C:
R0 = 0.000273
R1 = 16306
R2 = 535
R3 = 90.45
Code:
LBL 0
Very much appreciated.
05-31-2016, 12:36 PM
Post: #2
RE: Can't get integration right
You're not doing anything wrong, as that is the correct answer for the problem as stated.
Perhaps a digit is wrong somewhere in the constants?
Werner
05-31-2016, 01:39 PM (This post was last modified: 05-31-2016 01:46 PM by Marcio.)
Post: #3
RE: Can't get integration right
That is right. Just found out 833 was a "close" approximation and 838.87 is actually the correct answer.
Thank you.
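For reference, here is a quick cross-check of the same integral in SciPy rather than on the 15C (my own sketch; it should reproduce the value discussed above, roughly 838.87):

```python
import numpy as np
from scipy.integrate import quad

def integrand(x):
    # rate constant factor from the problem statement
    k = 0.000273 * np.exp(16306.0 * (1.0 / 535.0 - 1.0 / (535.0 + 90.45 * x)))
    return 1.0 / (k * (1.0 - x))

t, err = quad(integrand, 0.0, 0.515)
print(t, err)   # expected to agree with the thread's value, about 838.87
```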
Coupling is the way biology makes reactions that 'want' to happen push forward desirable reactions that don't want to happen. Coupling is achieved through the action of enzymes — but in a subtle way. An enzyme can increase the rate constant of a reaction. However, it cannot change the ratio of forward to reverse rate constants, since that is fixed by the difference of free energies, as we saw in Part 2:
$$ \displaystyle{ \frac{\alpha_\to}{\alpha_\leftarrow} = e^{-\Delta {G^\circ}/RT} } $$
and the presence of an enzyme does not change this.
Indeed, if an enzyme could change this ratio, there would be no need for coupling! For example, increasing the ratio \(\alpha_\rightarrow/\alpha_\leftarrow\) in the reaction
$$ \mathrm{X} + \mathrm{Y} \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} \mathrm{XY} $$
would favor the formation of XY, as desired. But this option is not available.
Instead, to achieve coupling, the cell uses enzymes to greatly increase both the forward and reverse rate constants for some reactions while leaving those for others unchanged!
Let's see how this works. In our example, the cell is trying to couple ATP hydrolysis to the formation of the molecule XY from two smaller parts X and Y. These reactions don't help do that:
$$ \begin{array}{cclc} \mathrm{X} + \mathrm{Y} & \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} & \mathrm{XY} & \qquad (1) \\ \\ \mathrm{ATP} & \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} & \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} & \qquad (2) \end{array} $$
but these do:
$$ \begin{array}{cclc} \mathrm{X} + \mathrm{ATP} & \mathrel{\substack{\gamma_{\rightarrow} \\\longleftrightarrow\\ \gamma_{\leftarrow}}} & \mathrm{ADP} + \mathrm{XP}_{\mathrm{i}} & (3) \\ \\ \mathrm{XP}_{\mathrm{i}} +\mathrm{Y} & \mathrel{\substack{\delta_{\rightarrow} \\\longleftrightarrow\\ \delta_{\leftarrow}}} & \mathrm{XY} + \mathrm{P}_{\mathrm{i}} & (4) \end{array} $$
So, the cell uses enzymes to make the rate constants for reactions (3) and (4) much bigger than for (1) and (2). In this situation we can ignore reactions (1) and (2) and still have a good approximate description of the dynamics, at least for sufficiently short time scales.
Thus, we shall study quasiequilibria, namely steady states of the rate equation for reactions (3) and (4) but not (1) and (2). In this approximation, the relevant Petri net becomes this:
Now it is impossible for \(\mathrm{ATP}\) to turn into \(\mathrm{ADP} + \mathrm{P}\) without \(\mathrm{X} + \mathrm{Y}\) also turning into \(\mathrm{XY}\). As we shall see, this is the key to coupling!
In quasiequilibrium states, we shall find a nontrivial relation between the ratios \([\mathrm{XY}]/[\mathrm{X}][\mathrm{Y}]\) and \([\mathrm{ATP}]/[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]\). This lets the cell increase the amount of XY that gets made by increasing the amount of ATP present.
Of course, this is just part of the full story. Over longer time scales, reactions (1) and (2) become important. They would drive the system toward a true equilibrium, and destroy coupling, if there were not an inflow of the reactants \(\mathrm{ATP}\), \(\mathrm{X}\) and \(\mathrm{Y}\) and an outflow of the products \(\mathrm{P}_{\mathrm{i}}\) and \(\mathrm{XY}\). To take these inflows and outflows into account, we need the theory of 'open' reaction networks... which is something I'm very interested in!
But this is beyond our scope here. We only consider reactions (3) and (4), which give the following rate equation:
$$ \begin{array}{ccl} \dot{[\mathrm{X}]} & = & -\gamma_\to [\mathrm{X}][\mathrm{ATP}] + \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ] \\ \\ \dot{[\mathrm{Y}]} & = & -\delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] +\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}] \\ \\ \dot{[\mathrm{XY}]} & = &\delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] -\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}] \\ \\ \dot{[\mathrm{ATP}]} & = & -\gamma_\to [\mathrm{X}][\mathrm{ATP}] + \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ] \\ \\ \dot{[\mathrm{ADP}]} & =& \gamma_\to [\mathrm{X}][\mathrm{ATP}] - \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ] \\ \\ \dot{[\mathrm{P}_{\mathrm{i}}]} & = & \delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] -\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}] \\ \\ \dot{[\mathrm{XP}_{\mathrm{i}} ]} & = & \gamma_\to [\mathrm{X}][\mathrm{ATP}] - \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ] -\delta_\to [\mathrm{XP}_{\mathrm{i}}][\mathrm{Y}] +\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}] \end{array} $$
Quasiequilibria occur when all these time derivatives vanish. This happens when
$$ \begin{array}{ccl} \gamma_\to [\mathrm{X}][\mathrm{ATP}] & = & \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ]\\ \\ \delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] & = & \delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}] \end{array} $$
This pair of equations is equivalent to
$$ \displaystyle{ \frac{\gamma_\to}{\gamma_\leftarrow}\frac{[\mathrm{X}][\mathrm{ATP}]}{[\mathrm{ADP}]}=[\mathrm{XP}_{\mathrm{i}} ] =\frac{\delta_\leftarrow}{\delta_\to}\frac{[\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]}{[\mathrm{Y}]} } $$
and it implies
$$ \displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} = \frac{\gamma_\to}{\gamma_\leftarrow}\frac{\delta_\to}{\delta_\leftarrow} \frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]} } $$
If we forget about the species \(\mathrm{XP}_i\) (whose presence is crucial for the coupling to happen, but whose concentration we do not care about), the quasiequilibrium does not impose any conditions other than the above relation. We conclude that, under these circumstances and assuming we can increase the ratio
$$ \displaystyle{ \frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]} } $$
it is possible to increase the ratio
$$ \displaystyle{\frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} } $$
This constitutes 'coupling'.
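As a numerical sanity check (my own sketch; the rate constants and initial concentrations below are made up), one can integrate the rate equation for reactions (3) and (4) and verify the quasiequilibrium relation above:

```python
import numpy as np
from scipy.integrate import solve_ivp

g_f, g_r, d_f, d_r = 2.0, 1.0, 3.0, 1.0     # gamma and delta rate constants (invented)

def rhs(t, c):
    X, Y, XY, ATP, ADP, Pi, XPi = c
    r3 = g_f * X * ATP - g_r * ADP * XPi     # net forward rate of reaction (3)
    r4 = d_f * XPi * Y - d_r * XY * Pi       # net forward rate of reaction (4)
    return [-r3, -r4, r4, -r3, r3, r4, r3 - r4]

c0 = [1.0, 1.0, 0.0, 5.0, 0.1, 0.1, 0.0]     # [X, Y, XY, ATP, ADP, Pi, XPi]
sol = solve_ivp(rhs, (0, 200), c0, rtol=1e-10, atol=1e-12)
X, Y, XY, ATP, ADP, Pi, XPi = sol.y[:, -1]

# both sides of the quasiequilibrium relation should agree at the steady state
print(XY / (X * Y), (g_f / g_r) * (d_f / d_r) * ATP / (ADP * Pi))
```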
We can say a bit more, since we can express the ratios of forward and reverse rate constants in terms of exponentials of free energy differences using the laws of thermodynamics, as explained in Part 2. Reactions (1) and (2), taken together, convert \(\mathrm{X} + \mathrm{Y} + \mathrm{ATP}\) to \(\mathrm{XY} + \mathrm{ADP} + \mathrm{P}_{\mathrm{i}}\). So do reactions (3) and (4) taken together. Thus, these two pairs of reactions involve the same total change in free energy, so
$$ \displaystyle{ \frac{\alpha_\to}{\alpha_\leftarrow}\frac{\beta_\to}{\beta_\leftarrow} = \frac{\gamma_\to}{\gamma_\leftarrow}\frac{\delta_\to}{\delta_\leftarrow} } $$
But we're also assuming ATP hydrolysis is so strongly exergonic that
$$ \displaystyle{ \frac{\beta_\to}{\beta_\leftarrow} \gg \frac{\alpha_\leftarrow}{\alpha_\to} } $$
This implies that
$$\displaystyle{ \frac{\gamma_\to}{\gamma_\leftarrow}\frac{\delta_\to}{\delta_\leftarrow} \gg 1 }$$
Thus,
$$ \displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} \gg \frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]} } $$
This is why coupling to ATP hydrolysis is so good at driving the synthesis of \(\mathrm{XY}\) from \(\mathrm{X}\) and \(\mathrm{Y}\)! Ultimately, this inequality arises from the fact that the decrease in free energy for the reaction
$$\mathrm{ATP} \longrightarrow \mathrm{ADP} + \mathrm{P}_{\mathrm{i}}$$
greatly exceeds the increase in free energy for the reaction
$$\mathrm{X} + \mathrm{Y} \longrightarrow \mathrm{XY}$$
But this fact can only be put to use in the right conditions. We need to be in a 'quasiequilibrium' state, where fast reactions have reached a steady state but not slow ones. And we need fast reactions to have this property: they can only turn \(\mathrm{ATP}\) into \(\mathrm{ADP} + \mathrm{P}_{\mathrm{i}}\) if they also turn \(\mathrm{X} + \mathrm{Y}\) into \(\mathrm{XY}\). Under these conditions, we have 'coupling'.
Next time we'll see how coupling relies on an 'emergent conservation law'.
You can also read comments on Azimuth, and make your own comments or ask questions there!
To be specific, consider a closed system of $N$ spinless particles described by the Schrödinger equation$$i\frac{d}{dt}\psi(t,\mathbf{x}_1,\mathbf{x}_2,...,\mathbf{x}_N)=H\psi(t,\mathbf{x}_1,\mathbf{x}_2,...,\mathbf{x}_N)\tag{1}$$where the Hamiltonian $H$ is $$H = \frac{\mathbf{P}_1^2}{2m_1} + \frac{\mathbf{P}_2^2}{2m_2} + \cdots+ \frac{\mathbf{P}_N^2}{2m_N}\tag{2}$$with $\mathbf{P}_n=-i\hbar\nabla_n$ being the momentum operator for the $n$-th particle, and $\nabla_n$ is the gradient with respect to $\mathbf{x}_n$. For simplicity, equation (2) assumes that the particles are non-interacting, but interaction terms $V(\mathbf{x}_j-\mathbf{x}_k)$ may be included if desired.
(To represent a system of identical bosons or fermions, simply set all the masses equal to each other and require that the wavefunction $\psi$ be totally symmetric or antisymmetric, respectively.)
The Hamiltonian is the observable corresponding to the system's total energy. A function $\psi$ that satisfies$$H\psi(t,\mathbf{x}_1,\mathbf{x}_2,...,\mathbf{x}_N)=E\psi(t,\mathbf{x}_1,\mathbf{x}_2,...,\mathbf{x}_N)\tag{3}$$would represent a state of definite energy $E$, and equation (1) says that if $\psi$ has definite energy initially, then it has that same definite energy forever.
In general, the state $\psi$ of interest won't have strictly definite energy, but any normalizable state $\psi$ can be written as a linear combination (superposition) of functions that do. Using $\langle\cdots\rangle$ to denote the expectation value of an operator "$\cdots$" in a given state $\psi$, we can define the mean energy $U\equiv\langle H\rangle$ and the standard deviation $$\Delta U\equiv \sqrt{\langle H^2\rangle-\langle H\rangle^2}.$$These definitions make sense whether or not any measurements have been made, and equation (1) implies that the quantities $U$ and $\Delta U$ are both independent of time. Even if $\psi$ doesn't have a definite energy, as long as $\Delta U$ is small compared to $U$, we can still reasonably refer to $U$ as "the" internal energy of the system, with the understanding that this is just an (excellent) approximation. If desired, we could think of this as being the result of an imperfect measurement of $H$ (with finite resolution $\Delta U$), but that's not necessary.
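As a concrete illustration (my own sketch, in units $\hbar = m = 1$ with an arbitrary grid and an arbitrarily chosen superposition), one can check numerically that $U$ and $\Delta U$ are well defined without any measurement and do not change under time evolution:

```python
import numpy as np
from scipy.linalg import eigh, expm

n, L = 400, 1.0
dx = L / (n + 1)
main = np.full(n, 1.0 / dx**2)
off  = np.full(n - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # -(1/2) d^2/dx^2 in a box

E, V = eigh(H)
psi = (V[:, 0] + V[:, 2]) / np.sqrt(2)        # superposition of two energy eigenstates

def stats(psi):
    U  = (psi.conj() @ H @ psi).real          # mean energy <H>
    U2 = (psi.conj() @ H @ H @ psi).real      # <H^2>
    return U, np.sqrt(U2 - U**2)              # (U, Delta U)

print(stats(psi))                              # at t = 0
psi_t = expm(-1j * H * 0.37) @ psi             # evolve; U and Delta U should not change
print(stats(psi_t))
```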
Referring to $U$ as "the" internal energy of the system makes sense for the same reason that it makes sense to talk about the location of a book resting on a table, even though we know that the book is a quantum object that doesn't really have a strictly well-defined location. As long as the quantum spread in the book's location is negligible for most practical purposes, we can reasonably talk about "the" location of the book.
Whether or not $\psi$ has a definite total energy, the individual particles typically don't have well-defined energies. In other words, even if $\psi$ is an eigenstate of $H$, it generally won't be an eigenstate of the single-particle operator $\mathbf{P}_n^2/2m_n$. The individual particles generally don't even have wavefunctions of their own, because they are typically entangled with each other. In other words, the multi-particle wavefunction $\psi$ typically cannot be factorized into single-particle wavefunctions. Still, as long as $\Delta U$ is small compared to $U$, we can refer to $U$ as "the" internal energy of the system, as a good approximation.
Now, what if $U$ is not equal to any of the possible eigenvalues $E$ in (3)? As pointed out in a comment by the OP, this can occur if the system is restricted to a finite volume, in which case the particles' momenta (and therefore the total energy) are restricted to a discrete set of values. That's okay, because as long as the volume is large enough, the spacing between the allowed energies will be so fine that it might as well be continuous for all practical purposes. The definition $U=\langle H\rangle$ does not require that $U$ be precisely equal to any eigenvalue of $H$, nor does the idea that $U$ represents the system's total internal energy — with the understanding that this is just an (excellent) approximation.
Altogether, for all practical purposes, we can regard $U=\langle H\rangle$ as "the" total internal energy of the system as long as $\Delta U$ is sufficiently small. Measurement is not required for this to make sense, and equation (1) implies that this $U$ is indeed constant in time.
Absolute Value Function
[math]|x| =\left\{\begin{array}{ll}x \quad\textrm{for} \quad x \geq 0\\-x \quad\textrm{for}\quad x \lt 0\end{array}\right.[/math]
Example(s): |-3|=3, [math]|(3,0,4)|=\sqrt{3^2+0^2+4^2}=5[/math], [math]|3+i4|=\sqrt{3^2+4^2}=5[/math]
References
2016 (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Absolute_value Retrieved 2016-07-10: In mathematics, the absolute value or modulus |x| of a real number x is the non-negative value of x without regard to its sign. Namely, |x| = x for a positive x, |x| = −x for a negative x (in which case −x is positive), and |0| = 0. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero.
Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings. For example, an absolute value is also defined for the complex numbers, the quaternions, ordered rings, fields and vector spaces. The absolute value is closely related to the notions of magnitude, distance, and norm in various mathematical and physical contexts.
2016 (Eric W. Weisstein, 2016) ⇒ Weisstein, Eric W. (1999-2016). "Absolute Value." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/AbsoluteValue.html Retrieved 2016-07-10: [math]|x|=x\,\mathrm{sgn}(x)[/math]
2008 (Upton & Cook, 2008) ⇒ Graham Upton, and Ian Cook. (2008). "A Dictionary of Statistics, 2nd edition revised." Oxford University Press. ISBN:0199541450
Chemical Potential The Chemical Potential: Simple Thermodynamics of Chemical Processes
The chemical potential $\mu$, which is simply the free energy per molecule, is probably the most useful thermodynamic quantity for describing and thinking about chemical systems. Because $\mu$ represents an energy for one molecule, it is easy to think about concretely. In fact, for a system consisting of molecule types A, B, C, etc., each occurring with $N_A$, $N_B$, ... copies, we can write the total free energy at constant pressure exactly as
$$G = \mu_A N_A + \mu_B N_B + \mu_C N_C + \cdots \qquad (1)$$
To focus on the chemical potential itself, we can say that $\mu_X$ is the energy required to add a molecule of type X to a system at temperature $T$ and pressure $P$, which consists of $N_A$ molecules of type A, $N_B$ molecules of type B, and so on. Importantly, the chemical potential is not a physical constant: implicit in $\mu$ are its many dependences,
$$\mu_X = \mu_X(T, P, N_A, N_B, \ldots),$$
which often may not be written out.
The key point is that, in general, the chemical potential depends on everything about the system. To give a simple example of such dependence, molecule type A could represent protons in solution (implying a certain pH), which would affect the energy required to add an acidic molecule of type B. As another example, in a solution of charged molecules of type A, the chemical potential $\mu_A$ will depend on how many A molecules are already present (i.e., the concentration [A]) because charged molecules repel one another. Note that the apparently simple form of Eq. (1) does notimply the molecule types are independent of one another because the $\mu$ values change as the $N$ values do.
To understand chemical equilibrium, a fundamental reference point in biochemistry, it turns out to be important to write the chemical potential as a derivative of the free energy. From the definition above, we want to know the incremental change in free energy $\Delta G$ based on adding a single molecule of type X - i.e., for $\Delta N_X = 1$. Using basic calculus ideas, we can write
$$\mu_X = \left(\frac{\partial G}{\partial N_X}\right)_{T,P,N_{Y\ne X}} \qquad (3)$$
Chemical reactions: Equilibrium and beyond
The chemical potential, by design, is the perfect tool for analyzing chemical behavior. Let us first consider the very simplest chemical reaction:
$$\mathrm{A} \rightleftharpoons \mathrm{B}$$
When getting started with a new system, it's always a good plan to examine the equilibrium point. Because the free energy will be minimized with respect to adjustable parameters ($N_A$ and $N_B$ in this case) at equilibrium, we should differentiate $G$ with respect to either $N_A$ or $N_B$. In fact, there is really only one adjustable parameter in this system since $N_B = N - N_A$ and $N$ is assumed constant. Using the basic rules of calculus and the definition of $\mu$ (3), we find
$$\frac{dG}{dN_A} = \mu_A - \mu_B$$
because $ d N_B / d N_A = -1$ at fixed total $N$. The equilibrium point itself is obtained by setting the $G$ derivative to zero (to minimize $G$), yielding the key chemical result that chemical potentials will match in equilibrium:
$$\mu_A = \mu_B \qquad (8)$$
This equality of chemical potentials does not imply that $N_A = N_B$ at equilibrium: in general, the equilibrium point will imply quite different reactant and product numbers (and hence concentrations). After all, most reactions substantially favor either products or reactants under a given set of conditions.
Connection to reaction rates, free energy, and standard free energy
The chemical potential must be intimately related to reaction rates and standard free energies, which also can be said to "control" chemical reactions. We will begin to clarify those connections now. Briefly, the standard free energy indicates where the equilibrium point of a reaction will be, as does the ratio of reaction rate constants. The difference in chemical potentials, like the free energy difference (compared to the standard free energy) tells us about the displacement from equilibrium.
We'll start with the standard free energy change of reaction $\dgstd$, where the "$\circ$" indicates standard temperature and pressure and the prime (') denotes pH = 7. From any chemistry or biochemistry book, the definition (for our simple $A \rightleftharpoons B$ reaction) is
where it's essential to realize that this applies to the equilibrium concentrations at standard conditions. The second line defines the equilibrium constant $\kkeq = \conceq{B} / \conceq{A}$.
Eq. (9) tells us the precise relation between the equilibrium concentrations and the standard free energy. When the product (B) is "favored" (i.e., when $[\mathrm{B}]_{\mathrm{eq}} > [\mathrm{A}]_{\mathrm{eq}}$), then $\Delta G^{\circ\prime}$ is negative, but when reactant A is favored then $\Delta G^{\circ\prime}$ is positive. Turning that around, $\Delta G^{\circ\prime}$ merely tells us the relative A and B concentrations that will occur at equilibrium. As we will see below, $\Delta G^{\circ\prime}$ absolutely does not indicate whether a given reaction will tend to proceed in the forward direction, because that depends on the immediate (starting) species concentrations - which typically are not in equilibrium. Again, $\Delta G^{\circ\prime}$ only indicates what the ratio of concentrations will be at equilibrium.
From the detailed-balance condition of equilibrium, we know that the ratio of rate constants also yields the equilibrium ratio of concentrations,
$$\frac{k_+}{k_-} = \frac{[\mathrm{B}]_{\mathrm{eq}}}{[\mathrm{A}]_{\mathrm{eq}}} = K_{\mathrm{eq}}$$
where $k_+$ is the rate constant for the A to B direction and $k_-$ is for the reverse reaction. This enables us to write the standard free energy change in terms of rate constants:
$$\Delta G^{\circ\prime} = -RT \ln \frac{k_+}{k_-}$$
Using the explicit dependence of chemical potential on concentration, namely $\mu_X = \mu^{\circ}_X + RT \ln [\mathrm{X}]$ derived below in Eq. (27) assuming independent molecules, we can also re-write the equality of chemical potentials (8) in equilibrium as
$$\mu^{\circ}_A + RT \ln [\mathrm{A}]_{\mathrm{eq}} = \mu^{\circ}_B + RT \ln [\mathrm{B}]_{\mathrm{eq}} \qquad (12)$$
Rearranging and comparing to the definition of $\Delta G^{\circ\prime}$ from Eq. (9), we have
$$\Delta G^{\circ\prime} = \mu^{\circ}_B - \mu^{\circ}_A = -RT \ln \frac{[\mathrm{B}]_{\mathrm{eq}}}{[\mathrm{A}]_{\mathrm{eq}}} \qquad (13)$$
where again the prime (') indicates we are considering pH = 7. It is very useful to also be familiar with the exponentiated form of this same relation:
$$\frac{[\mathrm{B}]_{\mathrm{eq}}}{[\mathrm{A}]_{\mathrm{eq}}} = e^{-(\mu^{\circ}_B - \mu^{\circ}_A)/RT} = e^{-\Delta G^{\circ\prime}/RT} \qquad (14)$$
which makes it simpler to grasp that higher standard free energy of B (relative to A) decreases the relative equilibrium concentration of B, as shown in the figure by the green equilibrium horizontal tie lines.
We have now seen the full set of equilibrium connections among the chemical potential, standard free energy difference, and reaction rates. Most importantly, the standard free energy difference (and standard chemical potentials) merely set the equilibrium point. A lower standard chemical potential means a larger equilibrium concentration, as we see from Eq. (13).
Below, we will generalize these results for more complicated reactions.
What $\mu$ tells us about out-of-equilibrium systems
Biological systems are rarely in equilibrium, so it is essential to understand non-equilibrium behavior. The result (8) that chemical potentials match in equilibrium is equally valuable for what it tells us about systems which are out of equilibrium, namely, that the chemical potentials are not equal. Such non-equilibrium systems will always tend to be driven toward equilibrium.
Consider a case where $\mu_A > \mu_B$ (so $\Delta \mu = \mu_B - \mu_A < 0$), and assume the chemical potential depends on concentration according to Eq. (27), derived below. Then we have
$$\mu^{\circ}_A + RT \ln [\mathrm{A}] > \mu^{\circ}_B + RT \ln [\mathrm{B}]$$
which can be re-written as
$$\frac{[\mathrm{A}]}{[\mathrm{B}]} > e^{(\mu^{\circ}_B - \mu^{\circ}_A)/RT} = \frac{[\mathrm{A}]_{\mathrm{eq}}}{[\mathrm{B}]_{\mathrm{eq}}}$$
That is, in this particular non-equilibrium example, the concentration of A is (relatively) higher than its equilibrium value: compare to Eq. (14). As shown in the figure below, for a non-equilibrium reaction with $\Delta \mu < 0$, the concentrations of A and B differ from the equilibrium values which would be described by a horizontal tie line (equal $\mu$ values).
Importantly, the chemical potential difference $\Delta \mu = \mu_B - \mu_A$ tells us the actual free energy required (or released, if negative) when a reaction occurs at a specific set of conditions - which is absolutely critical for understanding cellular reactions such as the hydrolysis of ATP under cellular conditions. (Remember that the standard free energy change $\Delta G^{\circ\prime}$ only tells us the equilibrium point and not the free energy change for a realistic reaction.) When $\Delta \mu = 0$, we have equilibrium, and no free energy is required to push the reaction. The figure shows two examples of non-equilibrium reactions: the vertical components of the brown arrows represent the $\Delta \mu$ values in each case.
Using Eq. (27), we can write the independent-molecule approximation for the change in chemical potential,
$$\Delta \mu = \Delta G^{\circ\prime} + RT \ln \frac{[\mathrm{B}]}{[\mathrm{A}]},$$
which is precisely equal to $\Delta G$ for the reaction, as can be seen from Eq. (1) for a single reaction, which corresponds to $\Delta N_B = - \Delta N_A = 1$ since one A molecule is converted to B.
You can probably guess that $\Delta \mu$ for ATP hydrolysis is large and negative under typical cellular conditions, though it can take on any value - even positive - under different conditions. Of course, the cell has tricks for making unfavorable reactions ($\Delta \mu > 0$) occur - by coupling them to other reactions that are more favorable still, so that the overall $\Delta \mu$ is negative.
More complex reactions
The book by Hill or any chemical thermodynamics text will explain the full generalization of the preceding results. Here we consider a somewhat more general reaction that will be useful for nucleotides, namely,
$$\mathrm{A} + \mathrm{B} \rightleftharpoons \mathrm{C} + \mathrm{D}$$
with forward and reverse rate constants $k_+$ and $k_-$. A key example is ATP + H$_2$O $\rightleftharpoons$ ADP + Pi.
Following the same steps as in the simple case leads to
$$e^{\Delta G/RT} = \frac{[\mathrm{C}][\mathrm{D}]\,/\,[\mathrm{A}][\mathrm{B}]}{[\mathrm{C}]_{\mathrm{eq}}[\mathrm{D}]_{\mathrm{eq}}\,/\,[\mathrm{A}]_{\mathrm{eq}}[\mathrm{B}]_{\mathrm{eq}}} \qquad (19)$$
which is the free energy change per reaction in the forward direction. Thus, for example, if the concentrations of C and D exceed their equilibrium values, then the free energy change is positive - take the log of Eq. (19) to see this. A positive free energy change for a reaction implies that free energy is required for the reaction to occur. In other words, on average, the reaction will occur spontaneously only in reverse, which makes sense since the level of "products" (C and D) is elevated.
ATP free energy under cellular conditions
ATP is famously the fuel of the cell, but to understand this quantitatively - or even in a qualitatively accurate way - we need to use the free energy and chemical potential concepts developed here. The ATP hydrolysis reaction is
$$\mathrm{ATP} + \mathrm{H_2O} \rightleftharpoons \mathrm{ADP} + \mathrm{P_i}$$
with $\Delta G^{\circ\prime} = -7.3$ kcal/mol (which refers to the forward/hydrolysis direction) according to Berg's textbook. That is, under standard conditions ADP is favored over ATP - i.e., in equilibrium the concentration of ADP will greatly exceed that of ATP - given typical water and phosphate concentrations. In the cell, however, there is often more ATP than ADP, leading to an even larger (more negative) $\Delta G \simeq - 12$ kcal/mol via Eq. (20),
$$\Delta G = RT \ln \frac{[\mathrm{ADP}][\mathrm{P_i}]\,/\,[\mathrm{ATP}]}{\left([\mathrm{ADP}][\mathrm{P_i}]\,/\,[\mathrm{ATP}]\right)_{\mathrm{eq}}} \qquad (20)$$
where we have used the fact that the concentration of water will change only negligibly due to the reaction. If $\Delta G \simeq - 12$ kcal/mol, that means that the cellular concentrations (numerator) differ from the equilibrium concentrations (denominator) by more than seven orders of magnitude! It is in this sense only that ATP is a highly activated molecule: the concentrations are kept far from equilibrium. Even though $\Delta G^{\circ\prime}$ is large and negative, if the cellular concentrations matched the equilibrium values there would be no usable $\Delta G$.
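To make Eq. (20) concrete, here is a back-of-the-envelope evaluation in Python using the equivalent form $\Delta G = \Delta G^{\circ\prime} + RT\ln([\mathrm{ADP}][\mathrm{P_i}]/[\mathrm{ATP}])$; the concentrations below are illustrative guesses, not measured values:

```python
import math

RT = 0.593                        # kcal/mol at ~298 K
dG0 = -7.3                        # standard Delta G°' of hydrolysis, kcal/mol (from the text)
ATP, ADP, Pi = 5e-3, 5e-4, 5e-3   # molar; hypothetical cellular concentrations

dG = dG0 + RT * math.log(ADP * Pi / ATP)
print(f"Delta G = {dG:.1f} kcal/mol")   # roughly -12 kcal/mol for these numbers
```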
The figure shows the free energy considerations for ATP hydrolysis in a very schematic way - assuming the concentrations of water and phosphate are constant (though [Pi] certainly would change significantly). Qualitatively, the figure gives a correct sense of hydrolysis free energetics: the red ADP chemical potential is well below that for ATP (blue), and for cellular hydrolysis the actual free energy change per reaction is greater than the vertical distance ($\dgstd$ for standard conditions) because cellular [ATP] typically exceeds [ADP].
The immediate energy source for synthesis of ATP - addition of phosphate to ADP - is a proton electrochemical gradient in most cells. The phosphorylation is accomplished by the ATP synthase using a remarkable series of free energy transductions, from chemical to mechanical, and back to chemical energy.
Deriving the (approximate) analytical form for the chemical potential
Building on our statistical mechanics discussion of the ideal gas, we can derive an equation for the chemical potential of a system of molecules, under the assumption they do not interact with one another. In this sense, the molecules are "ideal" in the same way as ideal gas particles. To fully appreciate the brief derivation below, readers should first review the partition function calculation in the ideal gas section.
Without writing down all the details, for $N$ non-interacting molecules of type X in a fixed volume V, the partition function becomes
$$Z = \frac{1}{N!}\left(\frac{V\, q_X}{\lambda_X^3}\right)^N \qquad (23)$$
where the dimensionless factor $q_X$ (which does not appear in the simple ideal gas partition function) represents integration of the Boltzmann factor over all degrees of freedom
internal to the molecule - bond lengths, angles, and dihedrals. Thus, $q_X$ is an internal partition function which may include substantial intra-molecular interactions, but any inter-molecular interactions are neglected. Note that the factorized form (23) is expected for non-interacting/independent species because the energy in the Boltzmann factor does not connect different molecules; separating the center-of-mass behavior as the usual ideal gas factors is simply a mathematical trick: see, e.g., books by Hill and Zuckerman for further details.
We can derive the free energy itself, using the relation $F = -k_B T \ln Z$ along with Stirling's approximation, which leads to
$$F = -N k_B T \left[\ln\!\left(\frac{V\, q_X}{N \lambda_X^3}\right) + 1\right]$$
Although we will not prove it here, the chemical potential can be derived either by differentiating the Gibbs free energy $G$ or the Helmholtz free energy $F$. The two will be equivalent when volume fluctuations are not important, which is what we expect for systems of biological interest, which tend to contain a large number of reactant and product molecules. (See the book by Zuckerman for more detailed discussion of volume fluctuations.) Assuming this equivalence, we have
$$\mu_X = \frac{\partial F}{\partial N} = k_B T \ln\!\left(\frac{N \lambda_X^3}{V\, q_X}\right)$$
It is convenient to separate this out into two terms using the rules of logarithms:
$$\mu_X = k_B T \ln\!\left(\frac{\mbox{1M}\, \lambda_X^3}{q_X}\right) + k_B T \ln\!\left(\frac{[\mathrm{X}]}{\mbox{1M}}\right)$$
where $[\mathrm{X}] = N/V$ is the concentration, and we have explicitly shown the standard one molar ("1M") concentration which makes arguments of the logs dimensionless and also makes the first term the standard chemical potential $\mu^{\circ}_X = k_B T \ln \left( \mbox{1M} \lambda_X^3 / q_X \right)$. The full chemical potential is then, in molar energy units,
$$\mu_X = \mu^{\circ}_X + RT \ln [\mathrm{X}] \qquad (27)$$
References
T.L. Hill, An Introduction to Statistical Thermodynamics, Dover, 1986.
J. M. Berg et al., "Biochemistry", W. H. Freeman. The 2002 edition is online for free.
D.M. Zuckerman, Statistical Physics of Biomolecules: An Introduction (CRC Press, 2010).
Exercises
1. Explain the quantitative relationship between $k_B T$, which uses the Boltzmann constant, and $RT$, which uses the gas constant. To do this, convert $k_B T$ in MKS units for $T$ = 300K to units of kcal/mole.
2. For the reaction A + B $\rightleftharpoons$ C, starting from the explicit form of the Gibbs free energy (1), derive the equilibrium relation $\mu_A + \mu_B = \mu_C$. Use the fact that $N_C$ must increase when $N_A$ and $N_B$ decrease.
Probability is the extent to which an event is likely to occur, measured by the ratio of the favourable cases to the whole number of cases possible.
When it comes to the GMAT exam, all you have to do is list all the cases and count the favourable ones. \(Probability = \frac{Number \; of \; Favourable \; Cases}{Total \; Number \; of \; Cases}\)
Points to Remember: Whenever there is a word ‘and’, you should multiply the probabilities (for independent events). Whenever there is a word ‘or’, you should add the probabilities. Only for mutually exclusive events can you simply add them; in general, \(P(A\cup B) = P(A) + P(B) - P(A\cap B)\).
Let’s look at some of the problems –
There is a 10% chance that it won’t snow during winter, and a 20% chance that the workplace will not be closed during winter. What is the probability that it will snow and the workplace will be closed during the winter?
P(it will snow) = 1 -P (it won’t snow) =1 – 0.1 = 0.9
P(workplace will be closed) = 1 – P(workplace will be open) = 1 – 0.2 = 0.8
Since we can see the word ‘and’, we need to multiply the probabilities. Hence the probability is \(0.8 \times 0.9 = 0.72\), or \(72\%\).
A telephone number contains 10 digits, including a 3-digit area code. Bob remembers the area code and the next 4 digits of the number. He also remembers that the remaining digits are not 0, 2, 3, 5 or 9. If Bob tries to find the number by guessing the remaining digits at random, then find the probability that he will be able to find the correct number in at most 2 attempts.
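The post leaves the solution out; here is one way to finish it (assuming Bob never repeats a failed guess):

```latex
% Each of the 3 unknown digits comes from {1, 4, 6, 7, 8}, so there are 5^3 = 125
% equally likely numbers. Guessing without repetition:
\[
P(\text{success in at most 2 attempts})
  = \frac{1}{125} + \frac{124}{125}\cdot\frac{1}{124}
  = \frac{2}{125}.
\]
```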
Is the critical point in critical phenomena the same thing of the fixed point of renormalization group flow? If yes, why? And if no, then what is the relation between the critical point and the renormalization group flow?
My understanding is the following. Consider the space, $\mathcal{H}$ of Hamilitonians spanned by several operators $O_i$ (that is $H$ ($\in \mathcal{H}$) = $\sum c_i O_i$). When we do RG, some operators (so called irrelevant operators) become less and less important as we go to large length scales. Some operators (so called relevant operators) becomes more and more important as we go to large length scales and it is coefficients of these operators that determine what phase we are in. One can get to the critical point by tuning the coefficients of these (relevant) operators. At the critical point, $\mathcal{H}$ is spanned only by the irrelevant operators. The subspace of $\mathcal{H}$, that is spanned by only the irrelevant operators is called the critical surface. Fixed point is a special point in this critical surface.
@Thhin Subhra Why do you say at the critical point, $\mathcal{H}$ is spanned only by the irrelevant operators? And at the fixed point, only relevant (and marginal) operators survive, how can the fixed point belong to the critical surface by your definition?
@Wein Eld I do not really understand the inconsistency that you are trying to point out. What I have in mind is the following. Suppose the space $\mathcal{H}$ is spanned by $(O_1,O_2,O_3)$. Let $O_1$ be relevant and $O_2, O_3$ irrelevant. Any point $H$ in $\mathcal{H}$ can be specified by three numbers $(x_1, x_2, x_3) \in R^3 (\sim \mathcal{H})$: $H = x_1 O_1+ x_2 O_2 +x_3 O_3$. The critical point is given by a particular value of $x_1$ (the coefficient of the relevant operator), say $x_1 =1$, which describes an $x_2-x_3$ plane (the critical surface). Any point in this plane will remain in this plane under RG flow. The fixed point lies somewhere in this plane where the RG flow stops.
The short answer is yes: critical points are effectively described by scale invariant field theories, which are by definition fixed points of the renormalization group flow. A longer answer would probably depend on the precise choice of definition of a critical point.
Could you please make the answer somewhat longer?
1. Measurement of the top quark mass with lepton+jets final states using $\mathrm{p}\mathrm{p}$ collisions at $\sqrt{s}=13\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 11/2018, Volume 78, Issue 11, pp. 1 - 27
The mass of the top quark is measured using a sample of $\mathrm{t\overline{t}}$ events collected by the CMS detector using proton-proton...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
2. Measurement of prompt and nonprompt $\mathrm{J}/\psi$ production in $\mathrm{p}\mathrm{p}$ and $\mathrm{p}\mathrm{Pb}$ collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 04/2017, Volume 77, Issue 4, pp. 1 - 27
Abstract This paper reports the measurement of $\mathrm{J}/\psi$ meson production in proton–proton ($\mathrm{p}\mathrm{p}$) and...
Journal Article
3. Measurement of the ratio of inclusive cross sections $\sigma (p\bar{p} \rightarrow Z+2~b \text{jets}) / \sigma (p\bar{p} \rightarrow Z+ \text{2 jets})$ in $p\bar{p}$ collisions at $\sqrt s=1.96$ TeV
Physical Review D, ISSN 1550-7998, 01/2015, Volume 91, Issue 5, p. 052010
Journal Article
Physical Review Letters, ISSN 0031-9007, 06/2015, Volume 115, Issue 1, p. 012301
The second-order azimuthal anisotropy Fourier harmonics, $v_2$, are obtained in p-Pb and PbPb collisions over a wide pseudorapidity ($\eta$) range based on...
DISTRIBUTIONS | PLUS AU COLLISIONS | LEE-YANG ZEROS | ANISOTROPIC FLOW | PARTICLES | PHYSICS, MULTIDISCIPLINARY | ECCENTRICITIES | NUCLEAR COLLISIONS | PROTON-PROTON | Correlation | Large Hadron Collider | Anisotropy | Dynamics | Collisions | Luminosity | Charged particles | Dynamical systems
Journal Article
5. Study of the underlying event in top quark pair production in $\mathrm{p}\mathrm{p}$ collisions at 13 $\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 02/2019, Volume 79, Issue 2
Journal Article
European Physical Journal C, ISSN 1434-6044, 11/2018, Volume 78, Issue 11
Journal Article
7. Observation of Charge-Dependent Azimuthal Correlations in p-Pb Collisions and Its Implication for the Search for the Chiral Magnetic Effect
Physical Review Letters, ISSN 0031-9007, 03/2017, Volume 118, Issue 12, pp. 122301 - 122301
Charge-dependent azimuthal particle correlations with respect to the second-order event plane in p-Pb and PbPb collisions at a nucleon-nucleon center-of-mass...
PARITY VIOLATION | SEPARATION | PHYSICS, MULTIDISCIPLINARY | FIELD | Hadrons | Correlation | Large Hadron Collider | Searching | Correlation analysis | Collisions | Solenoids | Atomic collisions | NUCLEAR PHYSICS AND RADIATION PHYSICS | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
European Physical Journal C, ISSN 1434-6044, 04/2017, Volume 77, Issue 4
Journal Article
9. Measurement of the top quark mass with lepton+jets final states using $\mathrm{p}\mathrm{p}$ collisions at $\sqrt{s}=13\,\text{TeV}$
European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 11/2018, Volume 78, Issue 11
The mass of the top quark is measured using a sample of $\mathrm{t\overline{t}}$ events containing one isolated muon or electron and at least four jets in the...
PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
Physical Review Letters, ISSN 0031-9007, 07/2015, Volume 115, Issue 4
We present a measurement of the fundamental parameter of the standard model, the weak mixing angle $\sin^2\theta^{\ell}_{\text{eff}}$, which determines the relative strength of weak and...
Journal Article
11. Observation of Correlated Azimuthal Anisotropy Fourier Harmonics in pp and p+Pb Collisions at the LHC
Physical Review Letters, ISSN 0031-9007, 02/2018, Volume 120, Issue 9, pp. 092301 - 092301
The azimuthal anisotropy Fourier coefficients (v_{n}) in 8.16 TeV p+Pb data are extracted via long-range two-particle correlations as a function of the event...
Anisotropy | Correlation analysis | Collisions | Rangefinding | NUCLEAR PHYSICS AND RADIATION PHYSICS | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS
Journal Article
12. Measurement of the weak mixing angle using the forward–backward asymmetry of Drell–Yan events in $\mathrm{p}\mathrm{p}$ collisions at 8$\,\text{TeV}$
The European Physical Journal C, ISSN 1434-6044, 9/2018, Volume 78, Issue 9, pp. 1 - 30
A measurement is presented of the effective leptonic weak mixing angle ($\sin^2\theta^{\ell}_{\text{eff}}$) using the forward–backward...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
Time Limit: 2 Seconds
Memory Limit: 65536 KB
After a hard struggle, DreamGrid was finally admitted to a university. Now he is having trouble calculating the limit of the ratio of two polynomials. Can you help him?
DreamGrid will give you two polynomials of a single variable \(x\) (e.g. x^2-4x+7) or constant integers, and then he will tell you an integer \(x_0\). Your job is to find out the limit of the ratio of these two polynomials (or constant integers) as \(x\) tends to \(x_0\). The first polynomial is the numerator and the second one is the denominator.
The first line of input contains an integer \(T\) (\(1 \le T \le 50\)), which indicates the number of test cases. For each test case:
The first two lines describe two polynomials or constant integers, consisting of integers, 'x', '+', '-', and '^' without any space. The coefficients range from -9 to 9, and the exponents range from 1 to 9 (if the exponent is 1, it is omitted and won't be displayed as '^1'). The operators are separated by integers or 'x' (you won't see '-+x' in the input).
The third line is the integer \(x_0\), ranging from -9 to 9.
It's guaranteed that there won't be two terms with the same exponent in the same polynomial, and that the numerator and denominator won't both be the constant 0.
Output 1 line for each case.
If the limit exists, you should output it as the simplest fraction (e.g. -1, -1/6, 0, 3/2, 2, 3). Otherwise, output "INF" (not including the quotation marks).
\(\lim\limits_{x \to 1}\frac{x^2-2x+1}{x^2-1} = 0\), and \(\lim\limits_{x \to 9}\frac{9x^8}{-9x^9} = -\frac{1}{9}\).
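For reference, here is a minimal Python sketch of the core computation (an illustration, not a reference solution: input parsing is omitted, and polynomials are assumed to be given as coefficient lists where coeffs[k] is the coefficient of x^k). It uses the fact that, for polynomials that both vanish at \(x_0\), L'Hôpital's rule lets us replace the ratio by the ratio of the derivatives:

from fractions import Fraction

def poly_eval(coeffs, x):
    # coeffs[k] is the coefficient of x**k
    return sum(c * x**k for k, c in enumerate(coeffs))

def poly_deriv(coeffs):
    # derivative of sum c_k x^k is sum k*c_k x^(k-1)
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

def poly_limit(num, den, x0):
    # While both polynomials vanish at x0, replace p/q by p'/q' (L'Hopital);
    # degrees strictly decrease, and the problem guarantees numerator and
    # denominator are not both the constant 0, so this terminates.
    while True:
        n, d = poly_eval(num, x0), poly_eval(den, x0)
        if d != 0:
            return str(Fraction(n, d))   # already in lowest terms, e.g. "3/2"
        if n != 0:
            return "INF"
        num, den = poly_deriv(num), poly_deriv(den)

print(poly_limit([1, -2, 1], [-1, 0, 1], 1))   # (x^2-2x+1)/(x^2-1) at x->1: "0"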
Tri-additive maps and local generalized $(α,β)$-derivations

Abstract
Let $R$ be a prime ring with nontrivial idempotents. We characterize a tri-additive map $f : R^3 \rightarrow R$ such that $f(x, y, z) = 0$ for all $x, y, z \in R$ with $xy = yz = 0$. As an application, we show that, in a prime ring with nontrivial idempotents, any local generalized $(\alpha , \beta)$-derivation (or a generalized Jordan triple $(\alpha , \beta)$-derivation) is a generalized $(\alpha , \beta)$-derivation.
Citation Example: Jamal M. R., Mozumder M. R. Tri-additive maps and local generalized $(α,β)$-derivations // Ukr. Mat. Zh. - 2017. - 69, № 6. - pp. 848-853.
Probability Seminar Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to [email protected]
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in $d \geq 2$ in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d \geq 2$.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions, and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model, the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar. March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative: $v(t) = 3t^2-12t+9$. But I don't know what to do after that. How could I find the intervals?
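One way to finish (a sketch using sympy; purely illustrative): factor the velocity and solve \(v(t) < 0\). Since \(v(t) = 3(t-1)(t-3)\), the particle moves to the left exactly for \(1 < t < 3\).

import sympy as sp

t = sp.symbols('t', real=True)
x = t**3 - 6*t**2 + 9*t + 11
v = sp.diff(x, t)                                 # 3*t**2 - 12*t + 9
print(sp.factor(v))                               # 3*(t - 3)*(t - 1)
print(sp.solve_univariate_inequality(v < 0, t))   # (1 < t) & (t < 3)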
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1)
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism |
Simple answer: If there does exist a more efficient algorithm that runs in $O(n^{\delta})$ time for some $\delta < 2$, then the strong exponential time hypothesis would be refuted.
We will prove a stronger theorem and then the simple answer will follow.
Theorem: If we can solve the intersection non-emptiness problem for two DFA's in $O(n^{\delta})$ time, then any problem that's non-deterministically solvable using only n bits of memory is deterministically solvable in $poly(n)\cdot2^{(\delta n/2)}$ time.
Justification: Suppose that we can solve intersection non-emptiness for two DFA's in $O(n^{\delta})$ time. Let a non-deterministic Turing machine M with a read only input tape and a read/write binary work tape be given. Let an input string x of length n be given. Suppose that M doesn't access more than n bits of memory on the binary work tape.
A computation of M on input x can be represented by a finite list of configurations. Each configuration consists of a state, a position on the input tape, a position on the work tape, and up to n bits of memory that represent the work tape.
Now, consider that the work tape was split in half. In other words, we have a left section of $\frac{n}{2}$ cells and a right section of $\frac{n}{2}$ cells. Each configuration can be broken up into a left piece and a right piece. The left piece consists of the state, the position on the input tape, the position on the work tape, and the $\frac{n}{2}$ bits from the left section. The right piece consists of the state, the position on the input tape, the position on the work tape, and the $\frac{n}{2}$ bits from the right section.
Now, we build a DFA $D_1$ whose states are left pieces and a DFA $D_2$ whose states are right pieces. The alphabet characters are instructions that say which state to go to, how the tape heads should move, and how the work tape's active cell should be manipulated.
The idea is that $D_1$ and $D_2$ read in a list of instructions corresponding to a computation of M on input x and together verify that it is valid and accepting. Both $D_1$ and $D_2$ will always agree on where the tape heads are because that information is included in their input characters. Therefore, we can have $D_1$ verify that the instruction is appropriate when the work tape position is in the left piece and $D_2$ verify when in the right piece.
In total, there are at most $poly(n) \cdot 2^{n/2}$ states for each DFA and at most $poly(n)$ distinct alphabet characters.
By the initial assumption, it follows that we can solve intersection non-emptiness for the two DFA's in $poly(n) \cdot 2^{(\delta n /2)}$ time.
You might find this helpful: https://rjlipton.wordpress.com/2009/08/17/on-the-intersection-of-finite-automata/
CNF-SAT is solvable using $k+O(\log(n))$ bits of memory where k is the number of variables. The preceding construction can be used to show that if we can solve intersection non-emptiness for two DFA's in $O(n^{\delta})$ time, then we can solve CNF-SAT in $poly(n) \cdot 2^{(\delta k/2)}$ time. Therefore, the simple answer holds.
Comments, corrections, suggestions, and questions are welcomed. :)
Let \( A \) be any real square matrix (not necessarily symmetric). Prove that: $$ (x'A x)^2 \leq (x'A A'x)(x'x) $$
The key point in proving this inequality is to recognize that \( x'A A'x \) can be expressed as the squared vector norm of \( A'x \).
Proof:
If \( x=0 \), then the inequality is trivial.
Suppose \( x \neq 0 \).
\( \frac{x'A x}{x'x} = \frac{(A'x)'x}{\| x \|^2} = \left(A'\frac{x}{\| x \|}\right)'\frac{x}{\| x \|} \)

Because \( \frac{x}{\| x \|} \) is a unit vector, \( A'\frac{x}{\| x \|} \) can be considered as a scaling and rotation of \( \frac{x}{\| x \|} \) by \( A' \); the resulting vector has norm \( \alpha \) for some \( \alpha \geq 0 \). Then \( \left(A'\frac{x}{\| x \|}\right)'\frac{x}{\| x \|} = \alpha \cos(\beta) \) for some \( -\pi \leq \beta \leq \pi \), where \( \beta \) is the angle between \( \frac{x}{\| x \|} \) and \( A'\frac{x}{\| x \|} \).
Now:
\( ( \frac{x'A x}{x'x} )^2 \)
\(= ( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} )^2 \)
\( =\alpha^2 \cos(\beta)^2 \)
\( \leq \alpha^2 \)
\(= (A'\frac{x}{\| x \|})'A'\frac{x}{\| x \|} \)
\(= \frac{(A'x)'A'x}{\| x \|^2} \)
\(= \frac{x'A A'x}{x'x} \)
Finally, multiplying both sides by \( (x'x)^2 \) completes the proof.
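As a quick numerical sanity check (not a substitute for the proof), one can test the inequality on random non-symmetric matrices with numpy:

import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = int(rng.integers(2, 6))
    A = rng.normal(size=(n, n))        # not necessarily symmetric
    x = rng.normal(size=n)
    lhs = (x @ A @ x) ** 2             # (x'Ax)^2
    rhs = (x @ A @ A.T @ x) * (x @ x)  # (x'AA'x)(x'x)
    assert lhs <= rhs + 1e-9           # small tolerance for rounding
print("inequality held on all random trials")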
Sep 03
Kiran Kedlaya On the algebraicity of (generalized) power series
A remarkable theorem of Christol from 1979 gives a criterion for detecting whether a power series over a finite field of characteristic p represents an algebraic function: this happens if and only if the coefficient of the n-th power of the series variable can be extracted from the base-p expansion of n using a finite automaton. We will describe a result that extends this result in two directions: we allow an arbitrary field of characteristic p, and we allow "generalized power series" in the sense of Hahn-Mal'cev-Neumann. In particular, this gives a concrete description of an algebraic closure of a rational function field in characteristic p (and corrects a mistake in my previous attempt to give this description some 15 years ago).
Sep 10
Sean Rostami Fixers of Stable Functionals
The epipelagic representations of Reeder-Yu, a generalization of the "simple supercuspidals" of Gross-Reeder, are certain low-depth supercuspidal representations of reductive algebraic groups G. Given a "stable functional" f, which is a suitably 'generic' linear functional on a vector space coming from a Moy-Prasad filtration for G, one can create such a representation. It is known that the representations created in this way are compactly induced from the fixer in G of f, and it is important to identify explicitly all the elements that belong to this fixer. This work is in progress.
Sep 17
David Zureick-Brown Tropical geometry and uniformity of rational points
Let X be a curve of genus g over a number field F of degree d = [F:Q]. The conjectural existence of a uniform bound N(g,d) on the number #X(F) of F-rational points of X is an outstanding open problem in arithmetic geometry, known to follow from the Bombieri-Lang conjecture. We prove a special case of this conjecture - we give an explicit uniform bound when X has Mordell-Weil rank r ≤ g-3. This generalizes recent work of Stoll on uniform bounds on hyperelliptic curves. Using the same techniques, we give an explicit, unconditional uniform bound on the number of F-rational torsion points of J lying on the image of X under an Abel-Jacobi map. We also give an explicit uniform bound on the number of geometric torsion points of J lying on X when the reduction type of X is highly degenerate. Our methods combine Chabauty-Coleman's p-adic integration, non-Archimedean potential theory on Berkovich curves, and the theory of linear systems and divisors on metric graphs. This is joint work with Joe Rabinoff and Eric Katz.
Sep 22
Joseph Gunther Embedding Curves in Surfaces and Stabilization of Hypersurface Singularity Counts
We'll present two new applications of Poonen's closed point sieve over finite fields. The first is that the obvious local obstruction to embedding a curve in a smooth surface is the only global obstruction. The second is a proof of a recent conjecture of Vakil and Wood on the asymptotic probability of hypersurface sections having a prescribed number of singularities.
Sep 24
Brandon Alberts The Moments Version of Cohen-Lenstra Heuristics for Nonabelian Groups
Cohen-Lenstra heuristics posit the distribution of unramified abelian extensions of quadratic fields. A natural question to ask would be how to get an analogous heuristic for nonabelian groups. In this talk I build on and extend recent work in the area of unramified extensions of imaginary quadratic fields, and bring it all together under one Cohen-Lenstra style heuristic.
Oct 08
Ana Caraiani On vanishing of torsion in the cohomology of Shimura varieties
I will discuss joint work in progress with Peter Scholze showing that torsion in the cohomology of certain compact unitary Shimura varieties occurs in the middle degree, under a genericity assumption on the corresponding Galois representation.
Oct 15
Valentin Blomer Arithmetic, geometry and analysis of a senary cubic form
We establish an asymptotic formula (with power saving error term) for the number of rational points of bounded height for a certain cubic fourfold, thereby proving a strong form of Manin's conjecture for this algebraic variety by techniques of analytic number theory.
Oct 22
Brian Cook Configurations in dense subsets of Euclidean spaces
A result of Katznelson and Weiss states that a suitably dense (measurable) subset of the Euclidean plane realizes every sufficiently large distance; that is, for every prescribed (sufficiently large) real number, the set contains two elements whose distance is this number. The analogue of this statement for finding three equally spaced points on a line, i.e. for finding three-term arithmetic progressions, in a given set is false, and in fact false in every dimension. In this talk we revisit the case of three-term progressions when the standard Euclidean metric is replaced by other metrics.
Oct 29
Aaron Levin Integral points and orbits in the projective plane
We will discuss the problem of classifying the behavior of integral points on affine subsets of the projective plane. As an application, we will examine the problem of classifying endomorphisms of the projective plane with an orbit containing a Zariski dense set of integral points (with respect to some plane curve). This is joint work with Yu Yasufuku.
Nov 12
Vlad Matei A geometric perspective on Landau's theorem over function fields
We revisit the recent result of Lior-Bary-Soroker. It deals with a function field analogue of Landau's classical result about the asymptotic density of numbers which are sums of two integer squares. The results obtained are just in the large characteristic and large degree regime. We obtain a characterization as q
Dec 17
Tonghai Yang A generating function of arithmetic divisors in a unitary Shimura variety: modularity and application
In the original Gross-Zagier formula and Zhang's extension to Shimura curves, the modularity of the generating function $[\xi] + \sum Z(m)\, q^m$ is a very important step, where $Z(m)$ are the Heegner divisors and $[\xi]$ is the rational canonical divisor of degree 1 (associated to the Hodge bundle). In the proof, they actually use the arithmetic version in the calculation: $[\hat{\xi}] + \sum \hat{Z}(m) q^m \in \widehat{CH}^1(X) \otimes \mathbb{C}$, which is also modular. In this talk, we define a generalization of this arithmetic generating function to unitary Shimura varieties of type $(n, 1)$ and prove that it is modular. It has applications to the Colmez conjecture and Gross-Zagier type formulas.
Dec 17
Nathan Kaplan Coming soon...
Coming soon... |
Consider for some $0 < \alpha \leq 1$ the space of functions $x:[0,1] \to \mathbb{R}^n$ such that $x(0) = 0$ and $\sup_{s \neq t} \frac{\|x(t)-x(s)\|}{|t-s|^{\alpha}}$ is finite.
There are at least two reasonable norms defined on this space. The first is the Hölder norm, which is just the supremum above. Another is the $1/\alpha$-variation, which is the supremum over all partitions $t_0 = 0 \le t_1 \le \cdots \le t_r = 1$ of $\left(\sum_{i=0}^{r-1}|x(t_{i+1}) - x(t_i)|^{1/\alpha}\right)^\alpha$.
Let us fix $\alpha= \frac{1}{2}$ and $x(t) = \sqrt{t}$ and suppose $y:[0,1] \to \mathbb{R}$ is piecewise linear with $y(0) = 0$. It follows easily that $\lim_{t\to 0}\frac{\|x(t)-y(t)\|}{\sqrt{t}} = 1$.
This implies that there is no sequence of piecewise linear approximations to $x$ in Hölder norm.
However, it's not too hard to show that $x$ can be approximated in $2$-variation by piecewise linear functions.
My question is the following: Are piecewise linear functions dense among $1/2$-Hölder functions in the $2$-variation sense?
I'm also interested in the same question replacing piecewise linear functions by smooth functions. |
Guessing Game (Problem 406)

We are trying to find a hidden number selected from the set of integers {1, 2, ..., n} by asking questions. For each number (question) we ask, we get one of three possible answers:

"Your guess is lower than the hidden number" (and you incur a cost of a), or
"Your guess is higher than the hidden number" (and you incur a cost of b), or
"Yes, that's it!" (and the game ends).

Given the value of n, a, and b, an optimal strategy minimizes the total cost for the worst possible case.

For example, if n = 5, a = 2, and b = 3, then we may begin by asking "2" as our first question.

If we are told that 2 is higher than the hidden number (for a cost of b = 3), then we are sure that "1" is the hidden number (for a total cost of 3).
If we are told that 2 is lower than the hidden number (for a cost of a = 2), then our next question will be "4".
If we are told that 4 is higher than the hidden number (for a cost of b = 3), then we are sure that "3" is the hidden number (for a total cost of 2+3 = 5).
If we are told that 4 is lower than the hidden number (for a cost of a = 2), then we are sure that "5" is the hidden number (for a total cost of 2+2 = 4).

Thus, the worst-case cost achieved by this strategy is 5. It can also be shown that this is the lowest worst-case cost that can be achieved. So, in fact, we have just described an optimal strategy for the given values of n, a, and b.

Let $C(n, a, b)$ be the worst-case cost achieved by an optimal strategy for the given values of n, a and b.

Here are a few examples:

$C(5, 2, 3) = 5$
$C(500, \sqrt 2, \sqrt 3) = 13.22073197\dots$
$C(20000, 5, 7) = 82$
$C(2000000, \sqrt 5, \sqrt 7) = 49.63755955\dots$

Let $F_k$ be the Fibonacci numbers: $F_k=F_{k-1}+F_{k-2}$ with base cases $F_1=F_2= 1$.

Find $\displaystyle \sum \limits_{k = 1}^{30} {C \left (10^{12}, \sqrt{k}, \sqrt{F_k} \right )}$, and give your answer rounded to 8 decimal places behind the decimal point.
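For small n, C(n, a, b) can be computed directly from the minimax recursion implicit in the statement: if m consecutive candidates remain and we guess the i-th smallest, the worst case is the larger of a plus the optimal cost on the m-i larger candidates and b plus the optimal cost on the i-1 smaller ones. A naive Python sketch (hopelessly slow for n = 10^12, but it reproduces C(5, 2, 3) = 5):

from functools import lru_cache
import math

def worst_case_cost(n, a, b):
    @lru_cache(maxsize=None)
    def C(m):
        # m = number of remaining consecutive candidates
        if m <= 1:
            return 0.0
        best = math.inf
        for i in range(1, m + 1):                    # guess the i-th smallest
            lower = a + C(m - i) if i < m else 0.0   # "lower": m-i candidates remain
            higher = b + C(i - 1) if i > 1 else 0.0  # "higher": i-1 candidates remain
            best = min(best, max(lower, higher))
        return best
    return C(n)

print(worst_case_cost(5, 2, 3))   # 5.0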
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{\text{int}}$ = 5.6 nb$^{-1}$. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
eQuant aims at providing a fast and easy-to-use service for assessing protein model quality. In addition, lightweight visualizations help to quickly catch the information that is of most importance to the user. If users are interested in inspecting and further processing the assessments, raw output data is also available for download. A major advantage of eQuant over most available model quality assessment programs (MQAPs) is the ability to process structures within only a few seconds. Also, no processing parameters have to be provided by the user (force fields, distance cut-offs, etc.). In the assessment process, all chains in a structure are considered, including given inter-molecular interactions between residues. Essentially, eQuant employs a set of features and procedures that have proven successful in other, existing approaches, but also considers information derived from coarse-grained knowledge-based potentials [Heinke, 2012; Heinke, 2013] as well as residue-residue interaction preferences and packing statistics. These data are obtained for each residue and evaluated by means of the random subspace method [Ho, 1998], which gives a prediction of local per-residue error. For each residue, the corresponding predicted error is an estimate of its Cα deviation between its location in the submitted model and in the unknown native structure. With respect to processing time, the assessment of a normal-sized query structure (a single chain with about 120 residues) requires less than one second of computation. However, like most MQAPs, eQuant is currently limited to processing soluble proteins.
Given all predicted local errors (Cα deviations), an estimate of the global structural match to the unknown native structure can be made; or to put it differently: the deviation between the structure model and the native structure after superimposition is estimated. Here, the global distance test total score (GDT_TS) [Kryshtafovych, 2014] is a widely used measure. Ranging from 0% to 100%, it quantifies the quality of the structural match. Values close to 100% indicate almost perfect structural alignments. Based on predicted per-residue deviations, the GDT can be computed and an overall quantification of the deviation from the unknown native structure estimated. Furthermore, the computed GDT score is re-scaled in order to provide a Z-score. The transformation is calculated by analyzing the population of pre-processed structures with a relative length of ±10% of the query model structure. If a set of structures is submitted, all models are sorted according to the Z-score and reported in descending order. Thus, the model predicted as most reliable will appear first.
To derive the GDT score, two structures of the same sequence but with different tertiary structures are superimposed. The GDT score is computed by $$GDT\_TS = \frac{1}{4} \sum_{t=1,2,4,8} \frac{f(t) \cdot 100}{N}$$ with \(f(t)\) referring to the absolute count of residue pairs less than \(t\) Å apart and \(N\) being the number of aligned atom pairs.
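As an illustration, GDT_TS is straightforward to compute from the per-residue deviations (a sketch assuming distances in Å after superposition):

import numpy as np

def gdt_ts(ca_distances):
    # fraction of residues closer than 1, 2, 4 and 8 Angstrom,
    # averaged over the four thresholds and expressed in percent
    d = np.asarray(ca_distances, dtype=float)
    return 25.0 * sum((d < t).mean() for t in (1, 2, 4, 8))

print(gdt_ts([0.4, 0.9, 1.7, 3.2, 9.0]))  # 65.0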
As alternative global quality measure the TM-score is provided. It ranges from 0 to 1, whereby values above 0.5 generally indicate the same fold for the submitted and the native structure. Scores close to 1 occur for almost identical structures [Xu, 2010].
The backbone of the utilized assessment routine is the per-residue prediction of Cα deviations between residue locations in the given model and the native structure. eQuant was trained on the CASP9 dataset and evaluated on CASP10 [Kryshtafovych, 2014].
As the first step in the process, the following descriptive features are determined for each residue:
These data provide a numerical description of a residue's environment with respect to observed and expected energies, and thus local stability. The random subspace method [Ho, 1998] is finally employed to predict and report Cα deviations as a measure of 'unnaturalness'. All predictions are made only by analyzing the submitted model; no knowledge of or information about the native structure is required. All underlying statistics are gathered from 63 soluble, non-redundant, high-resolution protein structures obtained via the PDB-REPRDB service [Noguchi, 2001]. During training, the CASP X data set [Moult, 2014] was used to compute Cα-Cα distances (the target function). QMEAN [Benkert, 2009; Benkert, 2011; Benkert, 2008] followed a comparable approach during its design and strongly influenced the development of this method.
eQuant accepts files containing structure data in RCSB Protein Data Bank format as well as archive files. Supported archive file formats are .tar, .gz, .rar, .zip and .7z. This enables you to submit multiple structures (and structure complexes) at the same time. After processing, all structure assessments are reported on one single page in descending order with respect to global quality expressed by the GDT Z-score.
Simple text files are accessible to export the evaluation results. The SMALL file contains the results in condensed format. Only basic residue information, the actual per-residue error score and a rudimentary interpretation are provided. Thereby, scores exceeding 3.8 Å are considered unreliable. Should you be interested in more detailed data, choose the FULL report file, as it summarizes not only the final evaluation scores but also all information which was gathered during the quality assessment routine. Even though PV [Biasini, 2014] is used for visualization of the structure, you can furthermore download a modified PDB file of the originally submitted structure with evaluation scores written to the B-factor column. Using e.g. PyMOL [DeLano, 2002] you can conveniently create appealing images. Additionally, a ZIP archive is provided which contains the result files. Last but not least, each figure on the result page can be stored locally by utilizing the browser's capabilities or Highcharts' [Highsoft, 2012] context menu in the upper right corner.
I am familiar with the \flushright and \hfill commands for positioning text to the far right of the page. However, throughout my document I am using the term (x\rightarrow\infty) on many lines, and I wish to send them all to the far right of the page so that they line up nicely. Any ideas?
If you're using unnumbered equation-like environments, as would seem to be the case, you could use (abuse?!) the \tag{$ ... $} and \tag*{$ ... $} macros to place the x\to\infty material at the far-right edge of the line. The first macro, \tag, surrounds its argument in parentheses, making it look a little bit like a substitute equation number; the second macro, \tag*, does not provide parentheses.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
g(x) &\sim x^\rho\ell(x) \tag{$x\to\infty$}\\
g(x) &\sim x^\rho\ell(x) \tag*{$x\to\infty$}
\end{align*}
\end{document}
Here is a solution with the flalign* environment and the \mathllap command (from mathtools). I also propose a second way of writing, which is better, in my opinion.

\documentclass[12pt,a4paper,bothsides]{article}
\usepackage[utf8]{inputenc}
\usepackage[showframe, nomarginpar]{geometry}
\usepackage{mathtools}
\begin{document}
\begin{flalign*}
  & & g(x) &\sim x^\rho\ell(x) & & \mathllap{(x\rightarrow\infty)}
\end{flalign*}
\[ g(x) \sim_{\infty} x^\rho\ell(x) \]
\end{document}
Although machine learning is great for shape classification, for shape recognition we must still use the old methods, such as the Hough Transform and RANSAC.
In this post, we'll look into using the Hough Transform for recognizing straight lines. The following is taken from E. R. Davies' book, Computer Vision: Principles, Algorithms, Applications, Learning, and from Digital Image Processing by Gonzalez and Woods.
Straight edges are amongst the most common features of the modern world, arising in perhaps the majority of manufactured objects and components – not least in the very buildings in which we live. Yet, it is arguable whether true straight lines ever arise in the natural state: possibly the only example of their appearance in virgin outdoor scenes is in the horizon – although even this is clearly seen from space as a circular boundary! The surface of water is essentially planar, although it is important to realize that this is a deduction: the fact remains that straight lines seldom appear in completely natural scenes. Be all this as it may, it is clearly vital both in city pictures and in the factory to have effective means of detecting straight edges. This chapter studies available methods for locating these important features.
Historically, the HT has been the main means of detecting straight edges, and since the method was originally invented by Hough in 1962, it has been developed and refined for this purpose. We're going to concentrate on it in this blog post, and this also prepares you to use the HT to detect circles, ellipses, corners, etc., which we'll talk about in the not-too-distant future. We start by examining the original Hough scheme, even though it is now seen to be wasteful in computation, as the method has since evolved.
First, let us introduce the Hough Transform. Often, we have to work in unstructured environments in which all we have is an edge map and no knowledge about where objects of interest might be. In such situations, all pixels are candidates for linking, and thus have to be accepted or eliminated based on predefined global properties. In this section, we develop an approach based on whether sets of pixels lie on curves of a specified shape. Once detected, these curves form the edge or region boundaries of interest.
Given $n$ points in the image, suppose that we want to find subsets of these points that lie on straight lines. One possible solution is to find all lines determined by every pair of points, then find all subsets of points that are close to particular lines. This approach involves finding $n(n-1)/2 \sim n^2$ lines, then performing $(n)(n(n-1))/2 \sim n^3$ comparisons of every point to all lines. As you might have guessed, this is an extremely computationally expensive task. Imagine it: checking every pixel against the lines formed by every other pair of pixels to see whether they form a straight line. Impossible!
Hough, as we said, proposed an alternative approach to this scanline method in 1962, commonly referred to as the Hough transform. Let $(x_i, y_i)$ denote a point in the xy-plane and consider the general equation of a straight line in slope-intercept form: $y_i = ax_i + b$. Infinitely many lines pass through $(x_i, y_i)$, but they all satisfy this equation for varying values of $a$ and $b$. However, writing this equation as $b = -x_i a + y_i$ and considering the ab-plane (also called parameter space) yields the equation of a single line for the fixed pair $(x_i, y_i)$. Furthermore, a second point $(x_j, y_j)$ also has a single line in parameter space associated with it, which intersects the line associated with $(x_i, y_i)$ at some point $(a', b')$ in parameter space (assuming, of course, that the lines are not parallel), where $a'$ is the slope and $b'$ the intercept of the line containing both $(x_i, y_i)$ and $(x_j, y_j)$ in the xy-plane. In fact, all points on this line have lines in parameter space that intersect at $(a', b')$. Here, this figure illustrates the concepts:
In principle, the parameter-space lines corresponding to all points $(x_k, y_k)$ in the xy-plane could be plotted, and the principal (goddammit, principle, principal, fuck this language!) lines in that plane could be found by identifying points in parameter space where large numbers of parameter-space lines intersect. However, a difficulty with this approach is that $a$ approaches infinity as the line approaches the vertical direction. One way around this difficulty is to use the normal representation of a line:

\[ x \cos(\theta) + y \sin(\theta) = \rho \]
The figure on the right below demonstrates the geometrical interpretation of the parameters $\rho$ and $\theta$. A horizontal line has $\theta = 0^\circ$, with $\rho$ being equal to the positive x-intercept. Similarly, a vertical line has $\theta = 90^\circ$, with $\rho$ being equal to the positive y-intercept. Each sinusoidal curve in the middle of the figure below represents the family of lines that pass through a particular point $(x_k, y_k)$ in the xy-plane.
Let's talk about the properties of the Hough transform. The figure below illustrates the Hough transform based on the equation above.
On the top, you see an image of size $M\times M$ (with $M=101$) with five labeled white points, and below it each of these points is mapped into parameter space, the $\rho\theta$-plane, using subdivisions of one unit for the $\rho$ and $\theta$ axes. The range of $\theta$ values is $\pm 90^\circ$ and the range of $\rho$ values is $\pm \sqrt{2}\, M$. As the bottom image shows, each curve has a different sinusoidal shape. The horizontal line resulting from the mapping of point 1 is a sinusoid of zero amplitude.
The points labeled A and B in the image on the bottom illustrate the colinearity detection property of the Hough transform. For example, point A marks the intersection of the curves corresponding to points 1, 3, and 5 in the xy image plane. The location of point A indicates that these three points lie on a straight line passing through the origin ($\rho = 0$) and oriented at $-45^\circ$. Similarly, the curves intersecting at point B in parameter space indicate that points 2, 3, and 4 lie on a straight line oriented at $45^\circ$, and whose distance from the origin is $\rho = 71$. Finally, the points labeled Q, R, and S illustrate the fact that the Hough transform exhibits a reflective adjacency relationship at the right and left edges of the parameter space.
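To make the voting procedure concrete, here is a minimal numpy sketch of building the accumulator for the ρθ-parametrization (an illustration, not the book's code):

import numpy as np

def hough_accumulator(edge_img, n_theta=180):
    # Each nonzero pixel (x, y) votes for every (rho, theta) with
    # rho = x*cos(theta) + y*sin(theta); peaks correspond to lines.
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))              # max possible |rho|
    thetas = np.deg2rad(np.linspace(-90, 90, n_theta, endpoint=False))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1    # shift rho to a valid row
    return acc, thetas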
Now that we know the basics of HT and line detection using HT, let’s take a look at Longitudinal Line Localization.
The previous method is insensitive to where along the infinite idealized line an observed segment appears. The reason for this is that we only have two parameters, $\rho$ and $\theta$. There is some advantage to be gained in this, in that partial occlusion of a line does not prevent its detection: indeed, if several segments of a line are visible, they can all contribute to the peak in parameter space, hence improving sensitivity. On the other hand, for full image interpretation, it is useful to have information about the longitudinal placement of line segments.
This is achieved by a further stage of processing. The additional stage involves finding which points contributed to each peak in the main parameter space, and carrying out connectivity analysis in each case. Some call this process xy-grouping. It is not vital that the line segments should be 4-connected (meaning a neighborhood with only the vertical and horizontal neighbors) or 8-connected (with diagonal neighbors) – just that there should be sufficient points on them so that adjacent points are within a threshold distance apart, i.e. groups of points are merged if they are within a prespecified distance. Finally, segments shorter than a certain minimum length can be ignored as too insignificant to help with image interpretation.
An alternative method for saving computation time is the Foot-of-Normal method. Created by the author of the book I'm quoting from, it eliminates the use of trigonometric functions such as arctan by employing a different parametrization scheme. Both of the methods we've described employ abstract parameter spaces in which points bear no immediately obvious visual relation to image space. In the alternative scheme, the parameter space is a second image space, which is congruent to image space.

This type of parameter space is obtained in the following way. First, each edge fragment in the image is processed much as previously, so that $\rho$ can be measured, but this time the foot of the normal from the origin is taken as a voting position in parameter space. Taking $(x_0, y_0)$ as the foot of the normal from the origin to the relevant line, it is found that:
\[b/a = y_0/x_0 \]
\[(x-x_0)x_0 + (y-y_0)y_0 = 0 \]
These two equations are sufficient to compute the two coordinates $(x_0, y_0)$. Solving for $x_0$ and $y_0$ gives:
\[ x_0 = va \]
\[y_0 = vb \]
where:

\[ v = \frac{ax + by}{a^2 + b^2} \]
Well, we're done for now! It's time to take a shower, then study regression, as I'm done with classification. I'm going to write a post about regression, stay tuned!
1. Definition
We have seen that a matrix $A$ multiplying a vector $\vec{x}$ is in fact a linear transformation.
We are now interested in the vectors $\vec{x}$ whose image $\vec{y}$ (under the linear transformation) is linearly dependent on $\vec{x}$ itself, namely:

$$\underbrace{ A\vec{x} }_{ \vec{y} } = \lambda \vec{x}, \qquad \vec{x} \neq \vec{0} \tag{1}$$

$\lambda$ and $\vec{x}$ satisfying $(1)$ are called eigenvalues and eigenvectors, respectively.
In $(1)$, $\boldsymbol{\vec{0}}$ is excluded. Indeed, we know anyway that $\underbrace{A\,\vec{0}}_{ \vec{0} } = \underbrace{ \lambda\, \vec{0} }_{ \vec{0} } \,$ is true for any value of $\lambda$.

2. Calculation
From $(1)$ we can obtain:

$$A\vec{x} - \lambda \vec{x} = \vec{0} \tag{2}$$
$$A\vec{x} - \lambda I\,\vec{x} = \vec{0} \tag{3}$$
$$(A - \lambda I)\,\vec{x} = \vec{0} \tag{4}$$
$$x_1 \vec{C_1} + \dots + x_n \vec{C_n} = \vec{0} \tag{5}$$

In $(5)$, the matrix-vector multiplication of $(4)$ has been decomposed into the columns $\vec{C_1}, \dots, \vec{C_n}$ of $A - \lambda I$. These columns must be linearly dependent, since by $(1)$ the $x_i$ cannot all be zero. Therefore their determinant equals zero.

Once the $\lambda$'s (eigenvalues) are found by solving the determinant, we replace them in the matrix in $(4)$ to determine the associated eigenvectors.
Example I
Let $h$ : $h \colon \, \mathbb{R}^2 \to \mathbb{R}^2$, $ \vec{x} \mapsto \left( \begin{smallmatrix} 2 & 1 \\ 3 & 4 \end{smallmatrix} \right) \vec{x}$.
Let’s determine its eigenvalues and eigenvectors.
Based on $(4)$, we have:

$$\begin{pmatrix} 2-\lambda & 1 \\ 3 & 4-\lambda \end{pmatrix} \vec{x} = \vec{0} \tag{6}$$

The columns of the matrix in $(6)$ are linearly dependent, thus:

$$(2-\lambda)(4-\lambda) - 3 = 0 \tag{7}$$

From $(7)$, we get: $\lambda_1 = 1$ and $\lambda_2 = 5$.
To determine the eigenvector $\vec{x}_1$ associated to $\lambda_1$, we replace $\lambda_1$ in $(6)$:

$ \iff 1x+1y = 0 \iff x=-y$. Let $x= \beta$; then $\vec{x}_1 = \beta \left( \begin{smallmatrix} 1 \\ -1 \end{smallmatrix} \right)$.
To determine the eigenvector $\vec{x}_2$ associated to $\lambda_2$, we replace $\lambda_2$ in $(6)$:

$ \iff -3x+1y = 0 \iff y=3x \iff x=\frac{y}{3}$. Let $x= \beta$; then $\vec{x}_2 = \beta \left( \begin{smallmatrix} 1 \\ 3 \end{smallmatrix} \right)$.
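One can check these results numerically, e.g. with numpy (the order of the returned eigenvalues may differ):

import numpy as np

A = np.array([[2, 1], [3, 4]])
vals, vecs = np.linalg.eig(A)
print(vals)          # [1. 5.]
print(vecs[:, 0])    # a multiple of (1, -1): the eigenvector for lambda = 1
print(vecs[:, 1])    # a multiple of (1, 3):  the eigenvector for lambda = 5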
3. But more concretely?

Figure 10.1 shows a shape before and after the transformation (in dashed line) under the linear transformation of example I. The eigenvector $\,\color{green}{\vec{x_2}}=\left(\begin{smallmatrix} -1 \\ -3 \end{smallmatrix}\right)$ (taking $\beta = -1$ above), as well as an ordinary vector $\color{navy}{\vec{u}}=\left(\begin{smallmatrix} 2 \\ 1 \end{smallmatrix}\right)$, are represented too. The eigenvector does not rotate under the linear transformation.

Recapitulation
An eigenvector is a vector whose image (under the linear transformation) is linearly dependent on itself, namely $\underbrace{ A\vec{x} }_{ \vec{y} } = \lambda \vec{x}$.

First the eigenvalues are found by solving the characteristic polynomial, and after that the associated eigenvectors can be found.

The characteristic polynomial is obtained by computing the determinant of the matrix $A -\lambda I\,$.
By definition, an eigenvector does not equal the null vector.
Introduction to Binary
concept
In the digital world we deal exclusively with "on" and "off" as our only two states. This makes a number system with only two numbers a natural choice. In this topic we'll cover the binary number system, what it is and how to convert between it and our usual decimal numbers.
Our usual number system that we use day to day is called the "decimal" system. This comes from the Latin "decimus" meaning "tenth" because each place can have 10 different values. When you write a decimal number each digit has a "place value". So the number "15" in decimal means: $$15 = 5\times 1 + 1 \times 10$$ If we created a new number system that only allowed each digit to be a 0 or a 1 we'd have the binary system. When we write a binary number we often follow it with a subscript of "2" to indicate that it's binary; when we write a decimal number we follow it with a subscript "10" to show that it's decimal. In the binary case the number "101" would be: $$101_2 = 1\times 1 + 0\times 2 + 1\times 4$$ Where the "place values" are increasing powers of two rather than powers of ten as in the decimal case.
fact
To convert a binary number to decimal multiply each of the digits by its "place value" (\(2^n\) where \(n\) is the number of digits from the right starting at 0) and sum them.
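In code this is a one-liner; here is a small Python sketch operating on a string of binary digits:

def to_decimal(bits):
    # multiply each digit by its place value (2**k from the right) and sum
    return sum(int(b) * 2**k for k, b in enumerate(reversed(bits)))

print(to_decimal("1011"))  # 11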
fact
The first nine place values (always counting from the rightmost digit) are:

\(2^0 = 1\)
\(2^1 = 2\)
\(2^2 = 4\)
\(2^3 = 8\)
\(2^4 = 16\)
\(2^5 = 32\)
\(2^6 = 64\)
\(2^7 = 128\)
\(2^8 = 256\)
You've likely seen these numbers plenty of times when dealing with technology; you'll soon become very familiar with them as you learn to work with digital electronics. You may have seen the number 255 used frequently with digital things; this is because 255 is the largest number that will fit into an 8-digit binary number (starting at 0). An 8-digit binary number is called a "byte" (with each digit called a "bit"), and it's historically the most common size of a storage unit in a digital system.
example
Convert \(1011_2\) to decimal. The place values (starting from the rightmost digit) are: \(1, 2, 4, 8\). So our number is: $$1011_2 = 1\times 1 + 1\times 2 + 0\times 4 + 1\times 8$$ $$1011_2 = 11_{10}$$
example
Convert \(00001011_2\) to decimal. You may notice that reading from right to left this is the same number as in the example above. Adding leading 0s to a binary number has no effect, just like adding leading 0s to a decimal number has no effect. Many times a binary number will be shown with 8 digits (or 16 or 32 or 64) by packing extra 0s on the left. So \(00001011_2 = 11_{10}\).
Converting a binary number to decimal is fairly straightforward, we just multiply a 1 or a 0 by its place value and add all the results together. Converting a decimal number to binary is a little more involved, there are more steps to the process but the process isn't that complicated.
example
Say we have the number \(200_{10}\) and we want to turn it into a binary number. Figuring out whether we need a "2" or a "4" is tricky just by looking at it, but it turns out that if we always add the largest power of two we can, we won't be wrong. So, for instance, the largest power of two that is less than 200 is 128. We write down \(10000000_2\) as our start for the binary value; now we have to add 1s to make up the remaining \(200-128=72\). Now we do the same again: the largest power of two less than 72 is 64, so we add a 1 at the 64 place: \(11000000_2\), and we have to make up the other \(72-64 = 8\). Now 8 is a power of two, so we add a 1 to the 8 place: \(11001000_2 = 200_{10}\) and we're done.
fact
To convert a decimal number to binary, start with your binary number \(B = 0\) and your decimal number \(D\), which is given (a code sketch of this procedure follows below).
1. Find the largest power of two \(P \leq D\).
2. Add a 1 to \(B\) in the place for \(P\) and set \(D = D-P\).
3. Repeat until \(D = 0\).
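The same greedy procedure in Python (a sketch; d is assumed to be a non-negative integer):

def to_binary(d):
    if d == 0:
        return "0"
    p = 1
    while p * 2 <= d:
        p *= 2                 # largest power of two <= d
    bits = ""
    while p:
        if p <= d:             # this place value fits: emit a 1
            bits += "1"
            d -= p
        else:
            bits += "0"
        p //= 2
    return bits

print(to_binary(216))  # 11011000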
example
Convert \(216_{10}\) to binary. We start with \(B = 0\) and \(D = 216_{10}\). The largest power of two less than \(D\) is \(P = 128_{10} = 10000000_2\). So \(B = 10000000_2\) and \(D = 216_{10} - 128_{10} = 88_{10}\). Lather, rinse, repeat. \(P = 64_{10} = 1000000_2\) \(B = 11000000_2,\quad D = 88_{10} - 64_{10} = 24_{10}\) \(P = 16_{10} = 10000_2\) \(B = 11010000_2,\quad D = 24_{10}-16_{10} = 8_{10}\) \(P = 8_{10} = 1000_2,\quad D = 8_{10}-8_{10} = 0\) \(B = 11011000_2\) And we're done, so: \(216_{10} = 11011000_2\)
Counting in binary takes a little getting used to, but you'll get the hang of it. To add 1, start at the rightmost digit: if it is a 0, change it to a 1 and you're done; if it is a 1, change it to a 0 and carry, adding 1 to the next digit to the left using the same rule.
fact
The first 8 numbers in binary are:
Decimal: 0 1 2 3 4 5 6 7
Binary: 0 1 10 11 100 101 110 111
fact
Another popular method of converting a decimal number to binary is the SOAR table. SOAR stands for: Step, Operation, Answer, Remainder. The steps are:
Divide your decimal number by 2. Write the integer part of the answer under the "Answer" column and the remainder under the "Remainder" column. Repeat step 1 with the integer part of your answer until the answer is 0. The binary number is then the digits in the "Remainder" column read from the bottom up.
example
Find \(29_{10}\) in binary using the SOAR tableWe'll set up our table to begin:
S O A R
1 \(\frac{29}{2}\) 14 1
2 \(\frac{14}{2}\) 7 0
3 \(\frac{7}{2}\) 3 1
4 \(\frac{3}{2}\) 1 1
5 \(\frac{1}{2}\) 0 1
Reading the "Remainder" column from the bottom up gives \(29_{10} = 11101_2\).
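A sketch of the same divide-by-2 process in Python (the helper name is made up):

```python
def decimal_to_binary_soar(d: int) -> str:
    """Divide-by-2 (SOAR) method: collect remainders, read bottom up."""
    remainders = []
    while d > 0:
        remainders.append(str(d % 2))   # the R column
        d //= 2                         # the A column feeds the next step
    return "".join(reversed(remainders)) or "0"

print(decimal_to_binary_soar(29))   # 11101
```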
practice problems |
If $B$ is a square matrix whose entries are integers, then the determinant of $B$ is an integer.
The inverse matrix of $A$ can be computed by the formula\[A^{-1}=\frac{1}{\det(A)}\Adj(A).\]
We prove that the inverse matrix $A^{-1}$ of an integer matrix $A$ is an integer matrix if and only if $\det(A)=\pm 1$.
Proof.
Let $I$ be the $n\times n$ identity matrix.
$(\implies)$: If $A^{-1}$ is an integer matrix, then $\det(A)=\pm 1$
Suppose that every entry of the inverse matrix $A^{-1}$ is an integer. It follows that $\det(A)$ and $\det(A^{-1})$ are both integers. Since we have\begin{align*}\det(A)\det(A^{-1})=\det(AA^{-1})=\det(I)=1,\end{align*}we must have $\det(A)=\pm 1$.
$(\impliedby)$: If $\det(A)=\pm 1$, then $A^{-1}$ is an integer matrix
Suppose that $\det(A)=\pm 1$. The inverse matrix of $A$ is given by the formula\[A^{-1}=\frac{1}{\det(A)}\Adj(A),\]where $\Adj(A)$ is the adjoint matrix of $A$. Thus, we have\[A^{-1}=\pm \Adj(A).\]Note that each entry of $\Adj(A)$ is a cofactor of $A$, which is an integer.
(Recall that a cofactor is of the form $\pm \det(M_{ij})$, where $M_{ij}$ is the $(i, j)$-minor matrix of $A$, hence entries of $M_{ij}$ are integers.)
Therefore, the inverse matrix $A^{-1}$ contains only integer entries.
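For a quick numerical check of this equivalence, sympy exposes the determinant, adjugate, and inverse used in the proof; the matrix below is a made-up example with $\det(A)=1$:

```python
from sympy import Matrix

A = Matrix([[2, 3], [1, 2]])   # integer matrix with det(A) = 1
print(A.det())                 # 1
print(A.adjugate())            # Matrix([[2, -3], [-1, 2]]): integer cofactors
print(A.inv())                 # equals the adjugate here, so all entries are integers
```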
|
Equivalences:
The non-orthogonal vectors problem (as defined above) for a set $S$ of $n$ Boolean
vectors, each of length $d$, and a positive integer $k$ is equivalent to
the following:
Finding a $2$ by $k$ submatrix of 1's in a given $n$ by $d$ Boolean matrix.
Finding a $\mathrm{K}_{2,k}$ complete subgraph in a given bipartite graph where the first vertex set has size $n$ and the second vertex
set has size $d$.
Naive Algorithm:
The naive approach for the non-orthogonal vectors problem runs in $O(d \cdot n^2)$ time because it takes $O(d \cdot n^2)$ time to naively compute the dot product of every pair of vectors.
Answer to questions (2) & (3):
Yes, there are several algorithms that are more efficient in different
cases.
First approach:
We can solve the non-orthogonal vectors problem in $O(d \cdot n + k \cdot n^2)$ time.
Note: Since the dot product of two length $d$ Boolean vectors must be bounded by $d$, the problem only makes sense when $k \leq d$.
Proof. Let a set $S$ of $n$ Boolean vectors each of length $d$ and a positive integer $k$ be given. Consider an enumeration $\{s_i\}_{i\in[n]}$ of the elements of $S$.
Create a hashmap $m$ from pairs $(a,b) \in [n] \times [n]$ to $\mathbb{N}$. Initially, $m$ maps each input to the value 0.
For each $i \in [d]$, we do the following. Enumerate through pairs of vectors $s_a$, $s_b$ such that $a < b$, the $i$th bit of $s_a$ is 1, and the $i$th bit of $s_b$ is 1. For each such $s_a$ and $s_b$, if $m(a,b) = k - 1$, then $s_a$ and $s_b$ are non-orthogonal, i.e., $s_a \cdot s_b \geq k$. Otherwise, increment $m(a,b)$ and continue.
If we finish the enumeration, then no pair of vectors is non-orthogonal.
It takes $O(n \cdot d)$ time to scan through every bit of every vector. Then, it takes additional time for enumerating pairs of vectors. Because there are at most ${n \choose 2}$ pairs of vectors and each pair can show up at most $k-1$ times before they've been shown to be non-orthogonal, enumerating pairs takes at most $O(k \cdot n^2)$ time. Therefore, the total runtime is $O(d \cdot n + k \cdot n^2)$.
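A sketch of this counting procedure in Python, assuming `S` is a list of equal-length 0/1 lists (all names are illustrative):

```python
from collections import defaultdict
from itertools import combinations

def find_nonorthogonal(S, k):
    """Return indices (a, b) with S[a].S[b] >= k, or None.

    Runs in O(d*n + k*n^2): each pair is visited at most k times
    before the early return fires."""
    d = len(S[0])
    counts = defaultdict(int)                # the map m in the proof
    for i in range(d):
        ones = [a for a, v in enumerate(S) if v[i] == 1]
        for a, b in combinations(ones, 2):   # pairs a < b sharing bit i
            if counts[a, b] == k - 1:
                return a, b                  # dot product has reached k
            counts[a, b] += 1
    return None                              # every pair has product < k
```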
Note: When $k = 2$, we can improve this approach to $O(n \cdot d)$ time.
This is because when $k = 2$, we can reduce finding a pair of non-orthogonal vectors among $n$ Boolean vectors of length $d$ to finding a pair of non-orthogonal vectors among $d$ Boolean vectors of length $n$.
Second approach:
We can solve the non-orthogonal vectors problem in $O(k \cdot {d \choose k} \cdot n)$ time.
Proof. Let a set $S$ of $n$ Boolean vectors each of length $d$ and a positive integer $k$ be given.
Enumerate through sets $P \subseteq [d]$ such that $P$ has size $k$. For every vector $v \in S$, check if $v$ has all 1's at the positions in $P$. If there are two vectors that have all 1's at the positions in $P$, then we've found two non-orthogonal vectors.
In total, there are ${d \choose k}$ possible choices for $P$. And, for each choice, we scan through $k \cdot n$ bits from the vectors. Therefore, in total, the runtime is $O(k \cdot {d \choose k} \cdot n)$.
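A sketch of this enumeration in Python (same assumptions on `S` as above):

```python
from itertools import combinations

def find_nonorthogonal_positions(S, k):
    """O(k * C(d, k) * n): find a k-set of positions where two
    vectors both have all 1s."""
    d = len(S[0])
    for P in combinations(range(d), k):
        first = None
        for idx, v in enumerate(S):
            if all(v[i] == 1 for i in P):
                if first is not None:
                    return first, idx        # both vectors share all 1s on P
                first = idx
    return None
```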
Third approach:
When $d \leq n$, we can solve the non-orthogonal vectors problem in $O(d^{\omega - 2} \cdot n^2)$ time where $\omega$ is the exponent for integer matrix multiplication. When $d > n$, we can solve the non-orthogonal vectors problem in $O(d \cdot n^{\omega - 1})$ time.
Note: As pointed out by @Rasmus Pagh, we can improve this algorithm to $O(n^{2 + o(1)})$ time when $d \leq n^{0.3}$. See here for more info: https://arxiv.org/abs/1204.1111
Proof. Let a set $S$ of $n$ Boolean vectors each of length $d$ and a positive integer $k$ be given.
Consider matrices $A$ and $B$. The first matrix $A$ has dimensions $n$ by $d$ where each row of $A$ is a vector from $S$. The second matrix $B$ has dimensions $d$ by $n$ where each column of $B$ is a vector from $S$.
We can compute the dot product of every pair of vectors in $S$ by computing $A \cdot B$ using algorithms for fast integer matrix multiplication.
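As a sketch with numpy (whose `@` dispatches to optimized BLAS rather than a true $O(n^{\omega})$ algorithm, but it illustrates the reduction; the function name is made up):

```python
import numpy as np

def find_nonorthogonal_matmul(S, k):
    A = np.array(S)                 # n x d, rows are the vectors
    G = A @ A.T                     # Gram matrix: G[a, b] = S[a] . S[b]
    np.fill_diagonal(G, 0)          # ignore each vector against itself
    hits = np.argwhere(G >= k)
    return tuple(hits[0]) if hits.size else None
```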
When $d \leq n$, one approach is to convert the rectangular matrix multiplication into $(\frac{n}{d})^2$ multiplications of square $d$ by $d$ matrices. By using fast square matrix multiplication, we can compute all of the multiplications in $O((\frac{n}{d})^2 \cdot d^{\omega}) = O(d^{\omega - 2} \cdot n^2)$ time.
When $d > n$, one approach is to convert the rectangular matrix multiplication into $\frac{d}{n}$ multiplications of square $n$ by $n$ matrices. By using fast square matrix multiplication, we can compute all of the multiplications in $O((\frac{d}{n}) \cdot n^{\omega}) = O(d \cdot n^{\omega - 1})$ time. |
There is at Least One Real Eigenvalue of an Odd Real Matrix
Problem 407
Let $n$ be an odd integer and let $A$ be an $n\times n$ real matrix.
Prove that the matrix $A$ has at least one real eigenvalue.
Proof 1.
Let $p(t)=\det(A-tI)$ be the characteristic polynomial of the matrix $A$.
It is a degree $n$ polynomial and the coefficients are real numbers since $A$ is a real matrix. Since $n$ is odd, the leading term of $p(t)$ is $-t^n$. That is, we have \[p(t)=-t^n+\text{lower terms}.\] (Note: if you use the alternative definition of characteristic polynomial $p(t)=\det(tI-A)$, then the leading term is $t^n$. You can easily modify the proof for this case.)
Therefore, as $t$ increases the polynomial $p(t)$ tends to $-\infty$:
\[\lim_{t\to \infty}p(t)=-\infty.\] Similarly, we have \[\lim_{t \to -\infty}p(t)=\infty.\] By the intermediate value theorem, there is $-\infty < \lambda < \infty$ such that $p(\lambda)=0$. Since a root of $p(t)$ is an eigenvalue of $A$, we have obtained a real eigenvalue $\lambda$ of $A$.
Proof 2.
Let $p(t)=\det(A-tI)$ be the characteristic polynomial of the matrix $A$ and write it as
\[p(t)=\prod_{i=1}^k(\lambda_i-t)^{n_i},\] where $\lambda_i$ are distinct eigenvalues of $A$ and $n_i$ is the algebraic multiplicity of $\lambda_i$.
Since $A$ is a real matrix, the coefficients of $p(t)$ are real numbers. So we have
\[\overline{p(t)}=p(\,\bar{t}\,).\] Thus, we have \begin{align*} p(\bar{t})&=\overline{p(t)}=\prod_{i=1}^k(\,\bar{\lambda}_i-\bar{t}\,)^{n_i}. \end{align*} Replacing $\bar{t}$ by $t$, we obtain \begin{align*} p(t)=\prod_{i=1}^k(\bar{\lambda}_i-t)^{n_i}. \end{align*} This yields that the complex conjugate $\bar{\lambda}_i$ is an eigenvalue with algebraic multiplicity $n_i$. Hence each non-real eigenvalue $\lambda$ appears in a pair $(\lambda, \bar{\lambda})$, counting multiplicity. Thus, there is an even number of non-real eigenvalues of $A$.
The number of all eigenvalues of $A$ counting multiplicities is $n$.
Since $n$ is odd, there must be at least one real eigenvalue of $A$.
|
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$; how could I find the intervals?
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
$O(n)$ acts transitively on $S^{n-1}$ with stabilizer $O(n-1)$ at a point
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism |
I am interested in deriving the convergence rate of the smallest eigenvalue of a sequence of random matrices with diverging dimension. More precisely, let $W_n(r)$ represent an $n$-dimensional standard Brownian motion at time $r$, and define $\lambda_1(A)$ as the minimum eigenvalue of $A$. Then, I would like to know how fast $\lambda_1\left(\int_0^1 W_n(r)W_n^\prime(r)dr\right)$ converges to zero in probability, as $n \to \infty$.
First of all, the reason that I believe that the minimum eigenvalue converges to zero in probability is given by the following argument, which was kindly explained to me by fellow MathOverflow user Nate Eldredge. Let $E = \lbrace e_1,\ldots,e_n\rbrace$ represent the collection of basis vectors spanning $\mathbb{R}^n$. Then, for any $\epsilon > 0$,
$\mathbb{P}\left(\lambda_1\left(\int_0^1 W_n(r)W_n^\prime(r)dr\right) > \epsilon \right) = \mathbb{P}\left(\underset{x \in \mathbb{R}^n\setminus \lbrace 0 \rbrace}{\inf}\frac{x^\prime\left(\int_0^1 W_n(r)W_n^\prime(r)dr\right)x}{x^\prime x} > \epsilon \right) \leq \mathbb{P}\left(\underset{x \in E}{\inf} x^\prime\left(\int_0^1 W_n(r)W_n^\prime(r)dr\right)x > \epsilon \right) = \mathbb{P}\left(\underset{1 \leq i \leq n}{\min} \int_0^1 W_{n,i}^2(r)dr > \epsilon \right) = \mathbb{P}\left(\int_0^1 W_{n,1}^2(r)dr > \epsilon \right)^n \to 0,$
as $n \to \infty$. The last equality follows from the $i.i.d.$-ness of the elements in $W_n(r) = (W_{n,1}(r),\ldots,W_{n,n}(r))^\prime$. Then, what I would like to know essentially is whether there exists an increasing function of $n$, say $f(n)$, such that $\mathbb{P}\left(f(n)\times\lambda_1\left(\int_0^1 W_n(r)W_n^\prime(r)dr\right) > \epsilon \right) \to 1$, and, if so, what does $f(n)$ look like?
Here is my thought process so far:
From Theorem 10 in Tolmatz (2002), it can be shown that $\mathbb{P}\left(n^a \times \int_0^1 W_{n,1}^2(r)dr > \epsilon \right)^n \to 1$, so that $f(n) = n^a$ would be sufficient to show that the upper bound I used before converges to zero in probability more slowly than $n^{-a}$. Unfortunately, what I am really after is showing that the minimum eigenvalue itself converges to zero in probability
sufficiently slowly. Hence, if I could find an analytically manageable lower bound for the minimum eigenvalue and show the rate at which it goes to zero (from above), my problem would be solved. The minimum eigenvalue bounds that I could find, such as the one based on Gershgorin's circle theorem, are not strict enough, as they lead to negative lower bounds.
Hope somebody can help me with this issue. Any suggestions, either in the form of a nice minimum eigenvalue bound or perhaps in terms of a different strategy to derive my answer, would be sincerely appreciated.
Best,
Etienne
Cited article: Tolmatz (2002), On the distribution of the square integral of the Brownian bridge.
The Annals of Probability, 30, 253-269 |
Fibonacci golden nuggets Problem 137
Consider the infinite polynomial series $A_F(x) = x F_1 + x^2 F_2 + x^3 F_3 + \dots$, where $F_k$ is the $k$th term in the Fibonacci sequence: $1, 1, 2, 3, 5, 8, \dots$; that is, $F_k = F_{k-1} + F_{k-2}$, $F_1 = 1$ and $F_2 = 1$.
For this problem we shall be interested in values of $x$ for which $A_F(x)$ is a positive integer.
Surprisingly $\begin{align*} A_F(\tfrac{1}{2}) &= (\tfrac{1}{2})\times 1 + (\tfrac{1}{2})^2\times 1 + (\tfrac{1}{2})^3\times 2 + (\tfrac{1}{2})^4\times 3 + (\tfrac{1}{2})^5\times 5 + \cdots \\ &= \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{2}{8} + \tfrac{3}{16} + \tfrac{5}{32} + \cdots \\ &= 2 \end{align*}$
The corresponding values of $x$ for the first five natural numbers are shown below.
$x$ $A_F(x)$
$\sqrt{2}-1$ 1
$\tfrac{1}{2}$ 2
$\frac{\sqrt{13}-2}{3}$ 3
$\frac{\sqrt{89}-5}{8}$ 4
$\frac{\sqrt{34}-3}{5}$ 5
We shall call $A_F(x)$ a golden nugget if $x$ is rational, because they become increasingly rarer; for example, the 10th golden nugget is 74049690.
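As an illustration (not part of the original problem statement): since $A_F(x) = x/(1-x-x^2)$, setting $A_F(x) = n$ gives $nx^2 + (n+1)x - n = 0$, so $x$ is rational exactly when the discriminant $(n+1)^2 + 4n^2 = 5n^2 + 2n + 1$ is a perfect square. A brute-force sketch, far too slow for the 15th nugget but enough to reproduce the small ones:

```python
from math import isqrt

# x rational iff 5*n^2 + 2*n + 1 is a perfect square.
def golden_nuggets(count):
    found, n = [], 1
    while len(found) < count:
        disc = 5 * n * n + 2 * n + 1
        if isqrt(disc) ** 2 == disc:
            found.append(n)
        n += 1
    return found

print(golden_nuggets(3))   # [2, 15, 104]
```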
Find the 15th golden nugget. |
Well-posedness and ill-posedness for the 3D generalized Navier-Stokes equations in $\dot{F}^{-\alpha,r}_{\frac{3}{\alpha-1}}$
1.
School of Mathematics and Statistics, Jiangsu Normal University, Xuzhou, Jiangsu 221116, China
2.
School of Mathematics and Statistics, Central China Normal University, Wuhan, Hubei 430079, China
We establish a dichotomy of well-posedness and ill-posedness depending on $r$. Specifically, by combining the new endpoint bilinear estimates in $L^{q_\alpha}_x L^2_T$ and $L^\infty_T\dot{F}^{-\alpha,1}_{q_\alpha}$ with a characterization of the Triebel-Lizorkin spaces via the fractional semigroup, we prove well-posedness of the gNS in $\dot{F}^{-\alpha,r}_{q_\alpha}$ for $r\in[1,2]$. Meanwhile, for any $r\in(2,\infty]$, we show that the solution to the gNS can develop norm inflation in the sense that arbitrarily small initial data in $\dot{F}^{-\alpha,r}_{q_\alpha}$ can produce an arbitrarily large solution after an arbitrarily short time.
Keywords: well-posedness, ill-posedness, Triebel-Lizorkin space, generalized Navier-Stokes equations.
Mathematics Subject Classification: 76D03, 35Q3.
Citation: Chao Deng, Xiaohua Yao. Well-posedness and ill-posedness for the 3D generalized Navier-Stokes equations in $\dot{F}^{-\alpha,r}_{\frac{3}{\alpha-1}}$. Discrete & Continuous Dynamical Systems - A, 2014, 34 (2) : 437-459. doi: 10.3934/dcds.2014.34.437
|
Special isosceles triangles Problem 138
Consider the isosceles triangle with base length, $b = 16$, and legs, $L = 17$.
By using the Pythagorean theorem it can be seen that the height of the triangle, $h = \sqrt{17^2 - 8^2} = 15$, which is one less than the base length.
With $b = 272$ and $L = 305$, we get $h = 273$, which is one more than the base length, and this is the second smallest isosceles triangle with the property that $h = b \pm 1$.
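As an illustration (not part of the original problem statement): $b$ must be even (since $b^2 = 4(L^2 - h^2)$), so writing $b = 2m$ and $h = 2m \pm 1$, Pythagoras gives $L^2 = h^2 + m^2 = 5m^2 \pm 4m + 1$. A brute-force scan over $m$ reproduces the two triangles above, though it is hopeless for all twelve:

```python
from math import isqrt

# L^2 = 5*m^2 +/- 4*m + 1 must be a perfect square.
def special_triangles(count):
    found, m = [], 1
    while len(found) < count:
        for sign in (-1, 1):
            sq = 5 * m * m + sign * 4 * m + 1
            L = isqrt(sq)
            if L * L == sq:
                found.append((2 * m, L))   # (base b, leg L)
        m += 1
    return found

print(special_triangles(2))   # [(16, 17), (272, 305)]
```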
Find $\sum L$ for the twelve smallest isosceles triangles for which $h = b \pm 1$ and $b$, $L$ are positive integers. |
I am your Father!
Time Limit: 2000/1000 MS (Java/Others)
Memory Limit: 65536/65536 K (Java/Others)
Description
> Darth Vader: "Obi-Wan never told you what happened to your father."
> Luke Skywalker: "He told me enough! He told me you killed him!"
> Darth Vader: "No, I am your father."
> — Vader and Luke, on Cloud City

A list of $n$ force-aware males numbered $1$ through $n$ was found. They are the chosen ones that will bring balance to the force. Being listed in the first place, Anakin Skywalker is the ancestor of all the rest $n-1$ persons. Interestingly, everyone else claims he is the father of some of the others, causing serious trouble. Fortunately, you found that the list also comes with these claims and their likelihoods. Your task is to find the true father of Nikana, the last one in the list (numbered $n$).

There are $m$ claims in the list. The $i$-th claim consists of three integers: $x_i$, $y_i$, and $w_i$, indicating that the $x_i$-th person claims he is the father of the $y_i$-th person, with a likelihood of $w_i$. Your task is to find a global assignment that assigns each person (except Anakin Skywalker) to someone in the list, i.e., find $f(u)$ such that:

1. Everyone is assigned a father, i.e., $f(u)\in\{1,2,\ldots,n\}$ for all $u\in\{2,3,\ldots,n\}$.
2. Each one's assigned father claims their relationship, i.e., for all $u$, there exists a claim $i$ such that $f(u) = x_i\land u=y_i$.
3. Nobody is an ancestor of himself in the assignment, directly or indirectly.
4. The assignment maximizes the sum of the likelihoods of the father-and-son relationships, i.e., $W=\sum_{i} w_i$ over all claims $i$ with $f(y_i)=x_i$ in the assignment.

You should find the father of Nikana (the person numbered $n$) in such an optimized assignment. If multiple assignments have the same optimal likelihood $W$, you should find the assignment that minimizes the number of his father, i.e., minimizes $f(n)$ while still attaining the optimal likelihood $W$. That makes Nikana closer to Anakin Skywalker.

Input
There are multiple test cases in the input file. The first line of the input gives the number of test cases $T$, then followed by $T$ test cases.
The first line of a test case contains $n$ ($1\le n\le 10^3$) and $m$ ($m\le 10^4$), the number of persons and the number of claims, respectively. Then $m$ lines follow. The $i$-th line contains three integers: $x_i$, $y_i$, and $w_i$, indicating the claimed father, son, and likelihood. $1\le w_i\le 100$ is guaranteed. Nobody will claim someone as his son twice. Output
For each test case, output one line containing two space-separated integers, the maximum likelihood $W$ and the father of Nikana.
Sample Input
2
3 3
1 2 10
1 3 10
2 3 10
3 3
1 2 10
1 3 10
2 3 11
Source
2017 Multi-University Training Contest - Te |
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to be becoming a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, each is the largest possible, e.g. the surreals are the largest possible field
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder if I can show that it is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 such that $0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
Learning Objectives
Explain the following laws within the Ideal Gas Law
The atmosphere of Venus is markedly different from that of Earth. The gases in the Venusian atmosphere are \(96.5\%\) carbon dioxide and \(3\%\) nitrogen. The atmospheric pressure on Venus is roughly 92 times that of Earth, so the amount of nitrogen on Venus would contribute a pressure well over \(2700 \: \text{mm} \: \ce{Hg}\). And there is no oxygen present, so we couldn't breathe there. Not that you would want to go to Venus: the surface temperature is usually over \(460^\text{o} \text{C}\).
Dalton's Law of Partial Pressures
Gas pressure results from collisions between gas particles and the inside walls of their container. If more gas is added to a rigid container, the gas pressure increases. The identities of the two gases do not matter. John Dalton, the English chemist who proposed the atomic theory, also studied mixtures of gases. He found that each gas in a mixture exerts a pressure independently of every other gas in the mixture. For example, our atmosphere is composed of about \(78\%\) nitrogen and \(21\%\) oxygen, with smaller amounts of several other gases making up the rest. Since nitrogen makes up \(78\%\) of the gas particles in a given sample of air, it exerts \(78\%\) of the pressure. If the overall atmospheric pressure is \(1.00 \: \text{atm}\), then the pressure of just the nitrogen in the air is \(0.78 \: \text{atm}\). The pressure of the oxygen in the air is \(0.21 \: \text{atm}\).
The partial pressure of a gas is the contribution that gas makes to the total pressure when the gas is part of a mixture. The partial pressure of nitrogen is represented by \(P_{N_2}\). Dalton's law of partial pressures states that the total pressure of a mixture of gases is equal to the sum of all of the partial pressures of the component gases. Dalton's law can be expressed with the following equation:
\[P_\text{total} = P_1 + P_2 + P_3 + \cdots\]
The figure below shows two gases that are in separate, equal-sized containers at the same temperature and pressure. Each exerts a different pressure, \(P_1\) and \(P_2\), reflective of the number of particles in the container. On the right, the two gases are combined into the same container, with no volume change. The total pressure of the gas mixture is equal to the sum of the individual pressures. If \(P_1 = 300 \: \text{mm} \: \ce{Hg}\) and \(P_2 = 500 \: \text{mm} \: \ce{Hg}\), then \(P_\text{total} = 800 \: \text{mm} \: \ce{Hg}\).
Collecting Gases over Water
You need to do a lab experiment where hydrogen gas is generated. In order to calculate the yield of gas, you have to know the pressure inside the tube where the gas is collected. But how can you get a barometer in there? Very simple: you don't. All you need is the atmospheric pressure in the room. As the gas pushes out the water, it is pushing against the atmosphere, so the pressure inside is equal to the pressure outside.
Gas Collection by Water Displacement
Gases that are produced in laboratory experiments are often collected by a technique called water displacement (Figure \(\PageIndex{2}\)). A bottle is filled with water and placed upside-down in a pan of water. The reaction flask is fitted with rubber tubing, which is then fed under the bottle of water. As the gas is produced in the reaction flask, it exits through the rubber tubing and displaces the water in the bottle. When the bottle is full of the gas, it can be sealed with a lid.
Because the gas is collected over water, it is not pure but is mixed with vapor from the evaporation of the water. Dalton's law can be used to calculate the amount of the desired gas by subtracting the contribution of the water vapor.
\[\begin{array}{ll} P_\text{Total} = P_g + P_{H_2O} & P_g \: \text{is the pressure of the desired gas} \\ P_g = P_{Total} - P_{H_2O} & \end{array}\]
In order to solve a problem, it is necessary to know the vapor pressure of water at the temperature of the reaction (see table below). The sample problem illustrates the use of Dalton's law when a gas is collected over water.
Temperature (\(^\text{o} \text{C}\)): 0 5 10 15 20 25 30 35 40 45 50 55 60
Vapor pressure (\(\text{mm} \: \ce{Hg}\)): 4.58 6.54 9.21 12.79 17.54 23.76 31.82 42.18 55.32 71.88 92.51 118.04 149.38
Example 14.14.1
A certain experiment generates \(2.58 \: \text{L}\) of hydrogen gas, which is collected over water. The temperature is \(20^\text{o} \text{C}\) and the atmospheric pressure is \(98.60 \: \text{kPa}\). Find the volume that the dry hydrogen would occupy at STP.
Solution:
Step 1: List the known quantities and plan the problem.
Known: \(V_\text{Total} = 2.58 \: \text{L}\); \(T = 20^\text{o} \text{C} = 293 \: \text{K}\); \(P_\text{Total} = 98.60 \: \text{kPa} = 739.7 \: \text{mm} \: \ce{Hg}\)
Unknown: \(V_{H_2}\) at STP \(= ? \: \text{L}\)
The atmospheric pressure is converted from \(\text{kPa}\) to \(\text{mm} \: \ce{Hg}\) in order to match units with the table. The sum of the pressures of the hydrogen and the water vapor is equal to the atmospheric pressure. The pressure of the hydrogen is found by subtraction. Then, the volume of the gas at STP can be calculated by using the combined gas law.
Step 2: Solve.
\[P_{H_2} = P_\text{Total} - P_{H_2O} = 739.7 \: \text{mm} \: \ce{Hg} - 17.54 \: \text{mm} \: \ce{Hg} = 722.2 \: \text{mm} \: \ce{Hg} \nonumber\]
Now the combined gas law is used, solving for \(V_2\), the volume of hydrogen at STP.
\[V_2 = \frac{P_1 \times V_1 \times T_2}{P_2 \times T_1} = \frac{722.2 \: \text{mm} \: \ce{Hg} \times 2.58 \: \text{L} \times 273 \: \text{K}}{760 \: \text{mm} \: \ce{Hg} \times 293 \: \text{K}} = 2.28 \: \text{L} \: \ce{H_2} \nonumber\]
Step 3: Think about your result.
If the hydrogen gas were to be collected at STP and without the presence of the water vapor, its volume would be \(2.28 \: \text{L}\). This is less than the actual collected volume because some of that is water vapor. The conversion using STP is useful for stoichiometry purposes.
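The same arithmetic in a short Python sketch (values copied from the example; the kPa-to-mmHg factor 760/101.325 is assumed):

```python
# Dalton's law correction followed by the combined gas law.
P_total = 98.60 * 760 / 101.325    # kPa -> mm Hg, about 739.7 mm Hg
P_water = 17.54                    # vapor pressure of water at 20 C (table)
P_H2 = P_total - P_water           # partial pressure of the dry hydrogen

V1, T1 = 2.58, 293.0               # collected volume (L), temperature (K)
P2, T2 = 760.0, 273.0              # STP
V2 = P_H2 * V1 * T2 / (P2 * T1)    # combined gas law solved for V2
print(round(V2, 2))                # 2.28 (L)
```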
Summary The total pressure in a system is equal to the sums of the partial pressures of the gases present. The vapor pressure due to water in a sample can be corrected for in order to get the true value for the pressure of the gas. |
Mathematical Physics
Title: Covariant Hamiltonian Field Theories on Manifolds with Boundary: Yang-Mills Theories
(Submitted on 1 Jun 2015 (v1), last revised 9 May 2016 (this version, v2))
Abstract: The multisymplectic formalism of field theories developed by many mathematicians over the last fifty years is extended in this work to deal with manifolds that have boundaries. In particular, we develop a multisymplectic framework for first order covariant Hamiltonian field theories on manifolds with boundaries. This work is a geometric fulfillment of Fock's characterization of field theories as it appears in recent work by Cattaneo, Mnev and Reshetikhin [Ca14]. This framework leads to a true geometric understanding of conventional choices for boundary conditions. For example, the boundary condition that the pull-back of the 1-form on the cotangent space of fields at the boundary vanish, i.e. $\pi^*\alpha = 0$, is shown to be a consequence of our finding that the boundary fields of the theory lie in the 0-level set of the moment map of the gauge group of the theory. It is also shown that the natural way to interpret Euler-Lagrange equations as an evolution system near the boundary is as a presymplectic system in an extended phase space containing the natural configuration and momenta fields at the boundary together with extra degrees of freedom corresponding to the transversal components at the boundary of the momenta fields of the theory. The consistency conditions at the boundary are analyzed and the reduced phase space of the system is determined to be a symplectic manifold with a distinguished isotropic submanifold corresponding to the boundary data of the solutions of Euler-Lagrange equations. This setting makes it possible to define well-posed boundary conditions, and provides the adequate setting for the canonical quantization of the system. The notions of the theory will be tested against three significant examples: scalar fields, the Poisson $\sigma$-model and Yang-Mills theories.
Submission history
From: Amelia Spivak [view email]
[v1] Mon, 1 Jun 2015 03:31:36 GMT (564kb,D)
[v2] Mon, 9 May 2016 16:49:54 GMT (541kb,D) |
Theorem 2
An $n\times n$ matrix $A$ is diagonalizable if and only if $A$ has $n$ linearly independent eigenvectors
Proof
Let $A$ be diagonalizable.
Thus, $A = PDP^{-1}$ for some invertible matrix $P$ and diagonal matrix $D$:
$P = \begin{bmatrix} \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n \end{bmatrix}$ and $D = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix},$
where the columns of $P$ are linearly independent since $P$ is invertible.
Since $A = PDP^{-1}$, we have $AP = PD$, which gives
$A\vec{v}_1 = d_1\vec{v}_1$
$A\vec{v}_2 = d_2\vec{v}_2$
$\vdots$
Therefore, $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are eigenvectors of $A$.
Since these are also the columns of P, we know they are linearly independent.
Thus $A$ has $n$ linearly independent eigenvectors.
Conversely, let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ be linearly independent eigenvectors of $A$, let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the associated eigenvalues, and let $P = \begin{bmatrix} \vec{v}_1 & \vec{v}_2 & \cdots & \vec{v}_n \end{bmatrix}$.
$P$ is invertible since its columns are linearly independent.
Let $D$ be the diagonal matrix with the eigenvalues as the entries on the main diagonal.
Then, since $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ are eigenvectors,
$A\vec{v}_1 = \lambda_1\vec{v}_1$
$A\vec{v}_2 = \lambda_2\vec{v}_2$
$\vdots$
So it can be shown that $AP = PD$, and multiplying both sides on the right by $P^{-1}$, we find that $A = PDP^{-1}$; therefore $A$ is diagonalizable.
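A quick numerical illustration of the theorem with numpy (the matrix is a made-up example with distinct eigenvalues, hence $n$ independent eigenvectors):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)      # columns of P are eigenvectors
D = np.diag(eigvals)

# P is invertible because its columns are linearly independent,
# and P D P^{-1} reconstructs A.
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True
```
|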
The problem of simulating a multi-path channel where the delays are not integer multiples of the sampling time is not trivial. The simplest method is just to round each tap to the nearest sample, however this is not what happens in reality:
Consider a discrete-time transmit signal $s[n]$ which is transmitted over a real channel. I.e. there needs to be some digital-analog-converter followed by a lowpass filter, to generate the continuous-time transmit signal:
$$s(t) = \sum_ns[n]g(t-nT)$$where $T$ is the sampling period and $g(t)$ is the combination of the DAC and the subsequent low-pass filter. Note that $g(t)$ can be assumed to be an RRC or sinc filter, for example. Then, the multi-path channel comes in:
$$r(t) = \sum_k\alpha_k s(t-\tau_k)=\sum_ns[n]\sum_k\alpha_kg(t-nT-\tau_k)$$
Finally, the received signal is low-pass-filtered with $\gamma(t)$ and discretized:$$\begin{align}r[n'] &= (\gamma(t)*r(t))|_{t=n'T}\\&=\sum_ns[n]\sum_k\alpha_ku(n'T-nT-\tau_k)\end{align}$$where $u(t)=g(t)*\gamma(t)$. Now, we can write this as a discrete convolution:
$$r[n'] = \sum_ns[n]h[n'-n]$$with$$h[n'-n] = \sum_k\alpha_ku((n'-n)T-\tau_k).$$
Here, $h[n]$ is the discrete-time equivalent channel. Note that this channel is actually neither causal nor finite, since $g(t)$ is bandlimited (and hence infinite in time).
You can understand $h[n]$ as a sampled version of the sum of continuous-time shifts of $u(t)$, the overall impulse response without the channel. Normally, $u(t)$ should be a Nyquist filter, i.e. ISI-free. In this case, and when the delays are integer multiples of the sampling period, you get a well-behaved (i.e. finite and causal) discrete channel.
This is how it works in reality. For simulation, you can generate this $h[n]$ and truncate it when the tap energy falls below a certain threshold.
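A minimal numpy sketch of this recipe, assuming an ideal sinc Nyquist pulse for $u(t)$; the tap gains, delays, window length, and threshold are made-up example values:

```python
import numpy as np

T = 1.0                       # sampling period (normalized)
alphas = [1.0, 0.6, 0.3]      # path gains
taus = [0.0, 1.37, 3.91]      # path delays, not integer multiples of T

N = 16
n = np.arange(-N, N + 1)      # h[n] is non-causal, so keep negative indices

# h[n] = sum_k alpha_k * u(nT - tau_k) with u(t) = sinc(t / T); integer
# delays give single taps, fractional delays smear over many samples.
h = sum(a * np.sinc(n - tau / T) for a, tau in zip(alphas, taus))

# Truncate where the taps are negligible, then treat the first kept tap
# as time zero to make the simulated channel causal.
keep = np.abs(h) > 1e-3 * np.abs(h).max()
h_trunc = h[keep.argmax(): len(keep) - keep[::-1].argmax()]

# Apply the channel to a transmit sequence by discrete convolution.
s = np.random.randn(100)
r = np.convolve(s, h_trunc)
```
|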
I should begin by apologizing for rambling on a bit. You've asked questions that are closely related to things I've been thinking about, but I don't know all the literature well, and I clearly haven't figured out how to say things concisely.
I will write $E_d$ for the operad (in spaces) of little $d$-disks, and $G_d$ ("$G$" for "Gerstenhaber") for what you're calling $\operatorname{Pois}^d$. (Because the word "$d$-Poisson algebra" appears at least in some papers by Cattaneo and collaborators, but at least the early ones get the sign wrong, and call by "$0$-Poisson" what should be called "$1$-Poisson".) Namely, $G_d$ is an operad in graded vector spaces generated by a binary "multiplication" in (homological) grading $0$ and a binary "bracket" in grading $d-1$ (and a zero-ary "unit" in grading $0$), such that the multiplication (and unit) is unital commutative, and such that the bracket is Lie up to shifting by $d-1$, and such that the Leibniz rule is satisfied. (In the correct sign conventions, saying that the bracket is "Lie after shifting" mean that it satisfies a Jacobi identity and is antisymmetric for odd $d$ and symmetric for even $d$.)
Begin with the operad $E_d$. The space of binary operations is a $(d-1)$-sphere. When $d\geq 1$, the $(d-1)$-sphere is nonempty, and its (integer!) homology is free on two generators, one in degree $0$ and the other in homological degree $d-1$. When $d=1$, please work in a setting where $2$ is invertible, and switch from the basis of points to the basis consisting of the average of the two points in the $0$-sphere and the difference. Let me call by "the bracket" any binary operation representing the degree-$(d-1)$ generator in homology (the fundamental class) and by "the multiplication" any binary operation representing the degree-$0$ class in homology. Let me also linearize by taking chains; so I will work with the operad of chain complexes $\mathrm C_\bullet(E_d)$, but write "$E_d$-algebra" for "$\mathrm C_\bullet E_d$-algebra".
Then the first observation is that the bracket in a fairly precise way measures the failure of the multiplication to be commutative. The second observation is that the bracket distributes over itself up to a contractible space, and is (anti)$^d$commutative, where "(anti)$^d$" means "anti" when $d$ is odd, and is empty when $d$ is even. So in fact if you throw away the multiplication and use only the bracket, there is a map to $\mathrm C_\bullet(E_d)$ from $\operatorname{Lie}[d-1]$, where by "$\operatorname{Lie}[d-1]$" I mean the Lie operad with the bracket shifted into homological degree $d-1$. And I mean that this map exists in some infinity sense. The point is that any $E_d$-algebra, if you remember only the action of the $(d-1)$-sphere, is a Lie algebra up to shifting by homological degree $(d-1)$, and up to replacing points with contractible spaces and all that. "Strongly homotopy" is a word that should now be bandied about.
Ok, so we can go a bit further. If you have a family of $E_d$-algebras over the (formal) disk $\operatorname{Spec}(k[\![\hbar]\!])$ that degenerates at the origin $\hbar=0$ to an $E_\infty$-algebra (recall that there are inclusions of operads $E_d \hookrightarrow E_{d+1}$, and the direct limit $E_\infty$ has a contracible space of $k$-ary operations for any $k\geq 0$, and so is "strongly homotopically" the commutative operad), then among other things you have a family of $\operatorname{Lie}[d-1]$-algebras that degenerates to the $0$-algebra at the origin. Then rescale the Lie bracket by $\hbar^{-1}$; now it need not vanish at $\hbar = 0$. The (homotopy-)associativity in the $E_d$-algebra implies a (homotopy-)Leibniz rule. So at the origin actually you have a (strongly homotopy) $G_d$-algebra.
This is all a long story to explain something short, which David Ben-Zvi has already alluded to: $G_d$ "is" the operad of "first order" $E_d$ algebras. Of course, I haven't at all shown that there aren't maybe further relations, or (which is the same thing!) further operations that survive (maybe after rescaling by $\hbar^{-1}$) at the limit. Essentially, the proof (that I know of) that there aren't any only applies in $d\geq 2$, and only in characteristic $0$: the celebrated fact that the homology $\mathrm H_\bullet(E_d)$ is on the nose $G_d$, as operad of chain complexes. I'm pretty sure, but don't know a reference, that this fails in non-zero characteristic. Point, is, when $d \geq 2$ it is clear that $\mathrm H_0(E_d)$ is (chains on) the commutative operad, because the other operations are all far away from $0$ in homological degree, and so there is a map $\mathrm H_\bullet(E_d) \leftarrow G_d$ in any characteristic when $d\geq 2$. It is somewhat remarkable that over $\mathbb Q$ this map is an iso; evidence that it is remarkable is that this fact comes after many wrong proofs.
Note that having a map $G_d \to \mathrm H_\bullet(E_d)$ is a far cry from having a map $G_d \to \mathrm C_\bullet(E_d)$. Sure, you can chose generators of homology and try to lift from $\mathrm H_\bullet$ to $\mathrm C_\bullet$, but there are spaces of ways to do this that are not contractible, and so you won't in general be able to make the lift be one of operads-up-to-strongly-homotopy. What I've tried to explain is a map from $G_d$ to the operad of "$E_d$-algebras over the formal disk that are $E_\infty$ at the point". (Recall: maps of operads go opposite to "forgetful" maps.)
Anyway, I'd like to end with a few comments on the too-celebrated "formality theorem". Formality is stronger than but implies the statement that every $G_d$-algebra is the first-order part of an $E_d$-algebra. Here's an example. Let me work with categories enriched over $\mathbb Q$. A Casimir category is a symmetric monoidal category equipped with a natural isomorphism between $\otimes$ and $\otimes\circ \operatorname{flip}$ satisfying something like Jacobi and Leibniz. Then formality with $d=2$ implies that every Casimir category is the first-order part of a braided monoidal category.
When $d=2$, Tamarkin pointed out that formality is the same as existence of something called "Drinfeld associators". $d=2$ is the number I understand the best, because Drinfeld associators are a bit older than the general formality theorem (to which I don't believe there is a proof over $\mathbb Q$; at least, Lambrechts and Volic, in their excellent paper on Kontsevich's proof of formality, say that for the (correct) unital versions of the operads, the trick that gets from Kontsevich's proof, which is over the periods, to something over $\mathbb Q$, doesn't work). Proofs of formality, i.e. Drinfeld associators, are the points of a scheme (or something like one — maybe I want to work more homotopically, and maybe I want projective limits, and ...). By "proofs of formality" I mean more or less "morphisms of operads". Anyway, this scheme (at least for $d=2$) is non-empty over $\mathbb Q$ (and for all $d$ it is nonempty over the periods), but it is never "contractible", whatever that means for this type of scheme-like object. So you really are making a choice when you invoke formality, and some choices are better than others. For example, some are more amenable to explicit computations, some satisfy extra symmetries, etc.. And there is no getting around this, because the groups of automorphisms of $G_d$ or $E_d$ are not contractible, and the space of isomorphisms is, of course, a bitorsor for these two groups.
Ok, a conclusion. I'm pretty sure the answer to your last question is "yes". But I'm also pretty sure that it hasn't been written down, really. (I would love to be told otherwise!) At least, I'm not aware of a paper that explains well the situation when $d\geq 3$. When $d=2$, there is plenty, including all the work by Etingof and collaborators on associators and Lie bialgebra (bi)quantization; these papers tend to be long, but understandable on about the fourth read. Some of it is well-reviewed by Bar-Natan, who is an excellent writer. Tamarkin's papers are short, and I find hard to read, but have a lot in them. I do highly recommend the papers by Severa on this stuff, and in particular his write-up of Tamarkin's work. A last part of the $d=2$ story is related to words like "Alekseev-Torossian" and "Kashiwara-Vergne", and I don't understand any of that part. |
Authors: Arnautov Vladimir Abstract
Let $R$ be an associative ring, and let $s \in R$ be an element such that $rs\ne 0$ and $sr\ne 0$ for any $0\ne r\in R$, and such that $s^kR\subseteq Rs$ for some natural number $k$; put $S=\{s^i \mid i=1,2,\dots \}$. A pseudonorm $\zeta$ on the ring $R$ can be extended to a pseudonorm on the ring of quotients $R_s$ of $R$ with respect to the multiplicative system $S$ if and only if
$$
\inf \left \{ \frac{\zeta (rs)}{\zeta(r)}, \frac{\zeta (sr)}{\zeta (r)}
\bigm | 0\ne r\in R \right \} >0.
$$
The given example shows that the demand that $s^k R\subseteq Rs$ for some natural number $k$ cannot be replaced by the demand that the right Ore condition is fulfilled in the ring $R$ with respect to the multiplicative system $S=\{s^i \mid i=1,2,\dots \}$. |
3-Uniform Friendship Hypergraphs by Derrick Stolee A very brief welcome to EXCILL2 participants. Thanks for visiting!
Today we discuss On a question of Sós about 3-uniform friendship hypergraphs by Hartke and Vandenbussche. These authors presented several new examples of friendship hypergraphs exhibiting a property that was not known to be possible. Their method applies careful use of integer programming as a black box. First, a careful construction of an integer linear program discovered some examples (and non-existence of examples) for small orders. Then, by making some symmetry assumptions, they constructed some larger integer programs that found larger examples, hinting towards an infinite family. The existence of larger examples remains open, although Li, van Rees, Seo, and Singhi recently proved some structure theorems that apply to the examples found by Hartke and Vandenbussche.
Friendship Graphs and Hypergraphs
A
friendship graph is a graph where every pair of vertices has a unique common neighbor (or “friend in common”). The Friendship Theorem of Erdős, Rényi, and Sós completely characterized these graphs to be a very simple family: a set of triangles all sharing exactly one vertex. This one vertex in common then forms a universal friend. (These graphs are uniquely $K_3$-saturated.) This completes the discussion on friendship graphs, but there are several ways to generalize the concept to hypergraphs.
We use a generalization due to Sós.
Definition. A friendship 3-hypergraph is a 3-uniform hypergraph where for every set $\{u, v, w\}$ of three vertices there exists a unique fourth vertex $x$ (called the completion of $\{u, v, w\}$) such that the hypergraph contains the three edges $\{u, v, x\}$, $\{u, w, x\}$, and $\{v, w, x\}$.
Sós built friendship 3-hypergraphs that contain a universal friend (a vertex $u$ such that for all pairs $\{v, w\}$ the edge $\{u, v, w\}$ is in the hypergraph) when $n \equiv 2$ or $4 \pmod 6$. The question remained: do friendship 3-hypergraphs require this universal friend?
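As a concrete rendering of the definition, here is a small Python sketch (mine, not from the paper) that checks whether a given 3-uniform hypergraph is a friendship 3-hypergraph; edges is assumed to be a set of frozensets of size 3 on the vertex set V.

from itertools import combinations

def is_friendship_3_hypergraph(V, edges):
    # every 3-set of vertices must have exactly one completion x
    for u, v, w in combinations(V, 3):
        completions = [x for x in V if x not in (u, v, w)
                       and frozenset((u, v, x)) in edges
                       and frozenset((u, w, x)) in edges
                       and frozenset((v, w, x)) in edges]
        if len(completions) != 1:
            return False
    return True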
Overview of Method
I want to take a moment to summarize the thought process that likely went through the authors’ minds when they were performing this research. This process is mostly a projection of my own process into their work, but it is probably a good one. They started with the question, and followed these steps:
Prove simple structural properties of the objects, if they exist.
Formulate a computational method based on those properties.
Execute the method and consider the results.
Based on properties of small examples, guess properties of larger examples.
Use those stronger properties to form a new computational method.
Execute and report the results (with success!).
Ok, let’s get back to the research.
Properties of a Friendship 3-Hypergraph
The authors found the following properties that must be satisfied by any friendship 3-hypergraph:
Every pair of vertices appears in at least one edge. Every edge is contained within a unique copy of $K_4^{(3)}$, the complete 3-hypergraph of order 4. There are at least $\frac{1}{3}\binom{n}{2}$ edges (each edge covers three pairs). There are at least $\frac{1}{12}\binom{n}{2}$ copies of $K_4^{(3)}$ (each copy contains four edges, and every edge lies in exactly one copy).
In our ILP formulation, we focus on the copies of $K_4^{(3)}$. It may seem odd to use variables for these copies when we could use variables for the edges, but the structure of these complete subgraphs makes the constraints much simpler and significantly restricts the structure of the subhypergraphs that are built as subsolutions to the full program. This is even more evident when we include indicator variables for the completion relationships.
The Integer Program Formulation
We now use these structural properties to build an integer linear program. Let our vertex set be $[n]$ and for all $A \subseteq [n]$ with $|A| = 4$, let $x_A$ be the binary indicator variable for whether the subgraph induced by $A$ is a copy of $K_4^{(3)}$. We also include for every set $B$ with $|B| = 3$ and vertex $v \notin B$ the variable $y_{B,v}$, which has value 1 if and only if $v$ is the completion of $B$
and $B$ is a nonedge. Thus, every edge $B$ has exactly one set $A \supseteq B$ with $x_A = 1$ and every nonedge $B$ has exactly one vertex $v$ with $y_{B,v} = 1$. We encode this as the following constraint:
(F) $\sum_{A \supseteq B} x_A + \sum_{v \notin B} y_{B,v} = 1$ for all $B \subseteq [n]$ with $|B| = 3$.
We now need to connect these variables in such a way that when $y_{B,v} = 1$ for $B = \{u_1, u_2, u_3\}$, the three edges $\{u_1, u_2, v\}$, $\{u_1, u_3, v\}$, and $\{u_2, u_3, v\}$ are present in the hypergraph and thus are present in some copy of $K_4^{(3)}$. We add three new 0/1 variables, one for each of these edges, for each pair $(B, v)$.
(B1) .
(B2) . (B3) .
It remains to encode that this indicator equals 1 if and only if each of the three edge variables has value 1.
(L) .
(U1) . (U2) . (U3) .
Since this integer program is extremely symmetric, a solver would spend a lot of time revisiting isomorphic copies of subsolutions already visited, so the authors added a trivial symmetry-breaking constraint by fixing one of the variables. This broke the symmetry enough that CPLEX was able to complete its search for small orders.
Theorem (Hartke, Vandenbussche). There do not exist friendship 3-hypergraphs on 6, 7, or 9 vertices, and all friendship 3-hypergraphs on 10 vertices have a universal friend.
Also, when the integer program is run with $n = 8$, Sós' construction could be returned as an answer, so the authors added one final constraint. Observe that a universal friend $u$ would have the edge $\{u, v, w\}$ for every pair $\{v, w\}$, so we can look at the
$K_4^{(3)}$-degree of a vertex to forbid a universal friend. A universal friend would have $\binom{n-1}{2}$ incident edges, and hence would be contained in $\frac{1}{3}\binom{n-1}{2}$ copies of $K_4^{(3)}$.
(D) $\sum_{A \owns v} x_A \leq \frac{1}{3}\binom{n-1}{2} - 1$ for all $v \in [n]$.
With this constraint and applying some optimization tricks, the authors found the following theorem.
Theorem (Hartke, Vandenbussche). There exists a unique friendship 3-hypergraph of order 8 with no universal friend.
The friendship 3-hypergraph has an algebraic representation, but I prefer the following geometric one. Let the vertices be the 8 vertices of the 3-dimensional cube. The copies of $K_4^{(3)}$ correspond to the planes that hit four vertices in the cube. See the figure below for an attempt at diagramming this interpretation.
Building Larger Instances from Smaller Instances
After finding an instance on 8 vertices, the authors were wise to notice that 8 is a power of 2 and to try doubling the number of vertices. While the program is intractable to solve from scratch, they made the following assumptions that a solution could satisfy:
(Inductive Property) A solution on $2n$ vertices contains two vertex-disjoint copies of a solution on $n$ vertices. Call these vertex sets the parts. (Pair Partition Property) The two parts partition into pairs such that each pair in one part forms a $K_4^{(3)}$ with every pair in the other part. (Automorphism Property) If we consider the vertices to be the elements of the group $G = (\mathbb{Z}_2)^k$, then the group acts on the hypergraph by addition. That is, if $g$ is an element and $v$ is a vertex, the automorphism associated with $g$ has $v \mapsto v + g$.
The Automorphism Property could perhaps be seen as a “lucky guess” but is still a good idea, especially because the 8-vertex instance satisfies this property. This fits in with what I call “Symmetry as Constraint” in order to drastically reduce the branching factor of a search. If certain variables are set to be equal by some group action, then instead of having a search space of size $2^N$ (where $N$ is the number of variables) we have a search space of size roughly $2^{N/k}$ (where $k$ is the average size of a variable orbit). This constant reduction in the exponent corresponds to a polynomial reduction in the base (i.e. $2^{N/k} = (2^{1/k})^N$). This is a
huge reduction.
With these constraints, the authors found two new instances on 16 vertices and one on 32 vertices. Relaxing the pair partition property they found a third instance on 16 vertices. These constructions are detailed in the paper.
It remains open whether or not such constructions exist for other powers of two. The exponential growth in $n$ makes this already difficult problem even more challenging to tackle. So, either you can wait until computers are fast enough to handle these cases or get to work finding more constraints and better algorithms. Perhaps some sort of structure could be inferred from these examples?
References
Stephen G. Hartke, Jennifer Vandenbussche, On a question of Sós about 3-uniform friendship hypergraphs,
Journal of Combinatorial Designs, 16(3), pages 253–261 (2008).
P.C. Li, G.H.J. van Rees, Stela H. Seo, N.M. Singhi, Friendship 3-hypergraphs,
Discrete Mathematics, 312(11), pages 1892–1899 (2012).
V. T. Sós, Remarks on the connection of graph theory, finite geometry and block designs.
Colloquio Internazionale sulle Teorie Combinatorie 2, pages 223-233 (1976). |
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/ψ and beauty hadron production cross sections at mid-rapidity in pp collisions at √s = 7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at √s = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ... |
The following key enhancement to DES was proposed in order to increase the complexity of finding the keys by exhaustive search.
$$\text{DES}^V_{k,k_1}(M)=\text{DES}_k(M)\oplus k_1$$
where the keys’ lengths are $|k|=56$ and $|k_1|=64$ ($k_1$ has the same length as the block length). Show that this proposal does not increase the complexity of breaking the cryptosystem using brute-force key search. That is, show how to break this scheme using on the order of $2^{56}\ \text{DES}$ encryptions/decryptions.
You may assume that you have a
moderate number of plaintext-ciphertext pairs, $C_i=\text{DES}^V_{k,k_1}(M_i)$.
Let $r$ be the number of pairs indexed from $1$ to $r$.
I define $\forall\ l\in\{0,1\}^{56},\ i\in\{1,\dots,r\}:\text{K}(l,i) = \text{DES}_l(M_i)\oplus C_i$.
My proposal for an adversary is:
for every l of size 56:
    for i = 1,...,r:
        compute K(l, i)
    if for all i != j : K(l,i) == K(l,j):
        return <l, K(l,1)> as the keys of the cryptosystem
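A minimal sketch of this adversary in Python. This is my own illustration, not part of the exercise: des_encrypt and candidate_keys are assumed to be supplied (e.g. by a crypto library and a generator over all $2^{56}$ candidate keys), and pairs holds the $r$ known $(M_i, C_i)$ byte strings.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def attack(pairs, des_encrypt, candidate_keys):
    # try every candidate l; K(l, 1) = DES_l(M_1) xor C_1 is the only
    # possible k1 for this l, so check it against the remaining pairs
    for l in candidate_keys:
        k1 = xor(des_encrypt(l, pairs[0][0]), pairs[0][1])
        if all(xor(des_encrypt(l, M), C) == k1 for M, C in pairs[1:]):
            return l, k1
    return None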
It is true that if some iteration reaches $l=k$ before returning, all the $\text{K}(l,i)$ will be equal to $k_1$ (from the definition of the new cryptosystem), so we will return $k,k_1$ as the keys.
But, I think that it is possible that for some $l\neq k$ the consistency check will be satisfied, since we are checking only a small number of messages. In such a case, if we meet $l$ before meeting $k$, the algorithm will return a wrong answer.
question
Is it possible to break the cryptosystem deterministically with the given complexity, or must we allow some error probability? |
This article is all about the basics of probability. There are two interpretations of a probability, but the difference only matters when we consider inference.
Frequency
The degree of belief
Axioms of Probability
A function \(P\) which assigns a value \(P(A)\) to every event \(A\) is a
probability measure or probability distribution if it satisfies the following three axioms. \(P(A) \geq 0 \text{ } \forall \text{ } A\) \(P(\Omega) = 1\) If \(A_1, A_2, …\) are disjoint then \(P(\bigcup_{i=1}^{\infty} A_i) = \sum_{i=1}^{\infty} P(A_i) \)
These axioms give rise to the following five properties.
\(P(\emptyset) = 0\) \(A \subset B \Rightarrow P(A) \leq P(B)\) \(0 \leq P(A) \leq 1\) \(P(A^\mathsf{c}) = 1 - P(A)\) \(A \cap B = \emptyset \Rightarrow P(A \cup B) = P(A) + P(B)\) The Sample Space The sample space, \(\Omega\), is the set of all possible outcomes, \(\omega\). Subsets of \(\Omega\) are events. The empty set \(\emptyset\) contains no elements. Example – Tossing a coin
Toss a coin once: \(\Omega = \{H, T\}\)
Toss a coin twice: \(\Omega = \{HH, HT, TH, TT\}\)
Then the event that the first toss is heads: \(A = \{HH, HT\}\)
Set Operations – Complement, Union and Intersection Complement
Given an event, \(A\), the
complement of \(A\) is \(A^\mathsf{c} = \{\omega \in \Omega : \omega \notin A\}\). Union
The
union of two sets A and B, \(A \cup B = \{\omega \in \Omega : \omega \in A \text{ or } \omega \in B\}\), is the set of the events which are in either A, or in B, or in both. Intersection
The
intersection of two sets A and B, \(A \cap B = \{\omega \in \Omega : \omega \in A \text{ and } \omega \in B\}\), is the set of the events which are in both A and B. Difference Set
The
difference set is the events in one set which are not in the other: \(A \setminus B = \{\omega : \omega \in A, \omega \notin B\}\). Subsets
If every element of A is contained in B then A is a
subset of B, written \(A \subset B\), or equivalently, \(B \supset A\). Counting elements
If A is a finite set, then \(|A|\) denotes the number of elements in A.
Indicator function An indicator function of an event A can be defined as \(I_A(\omega) = 1\) if \(\omega \in A\) and \(I_A(\omega) = 0\) if \(\omega \notin A\). Disjoint events
Two events A and B are
disjoint or mutually exclusive if \(A \cap B = \emptyset\) (the empty set) – i.e. there are no events in both A and B. More generally, \(A_1, A_2, \dots\) are disjoint if \(A_i \cap A_j = \emptyset\) whenever \(i \neq j\). Example – intervals of the real line
The intervals \([0,1), [1,2), [2,3), \dots\) are disjoint.
The intervals \([0,2)\) and \([1,3)\) are not disjoint. For example, \([0,2) \cap [1,3) = [1,2)\). Partitions
A
partition of the sample space is a set of disjoint events \(A_1, A_2, \dots\) such that \(\bigcup_{i=1}^{\infty} A_i = \Omega\). Monotone increasing and monotone decreasing sequences
A sequence of events, \(A_1, A_2, \dots\) is
monotone increasing if \(A_1 \subset A_2 \subset \cdots\). Here we define \(\lim_{n \to \infty} A_n = \bigcup_{i=1}^{\infty} A_i\) and write \(A_n \uparrow A\).
Similarly, a sequence of events, \(A_1, A_2, \dots\) is
monotone decreasing if \(A_1 \supset A_2 \supset \cdots\). Here we define \(\lim_{n \to \infty} A_n = \bigcap_{i=1}^{\infty} A_i\). Again we write \(A_n \downarrow A\).
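A minimal sketch in Python (mine, not from the article) that checks the finite versions of these definitions and of the probability axioms on the two-coin sample space; the uniform measure P below is an assumption made for the example.

omega = {"HH", "HT", "TH", "TT"}   # sample space for two coin tosses
A = {"HH", "HT"}                   # event: the first toss is heads
B = {"HH", "TH"}                   # event: the second toss is heads

complement = omega - A             # the complement A^c
union = A | B                      # A union B
intersection = A & B               # A intersect B
difference = A - B                 # the difference set A \ B

def indicator(event, outcome):
    # indicator function I_A(omega)
    return 1 if outcome in event else 0

def P(event):
    # uniform probability measure on the finite sample space
    return len(event) / len(omega)

assert P(omega) == 1                               # axiom 2
assert P(union) == P(A) + P(B) - P(intersection)   # inclusion-exclusion
assert P(complement) == 1 - P(A)                   # property 4
|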
Out of the following real numbers, which is the coolest? e (2.7182818...) (38%, 6 Votes) Zero (0) (25%, 4 Votes) pi (3.14159....) (25%, 4 Votes) One (1) (13%, 2 Votes) Golden Ratio (1.61803...) (0%, 0 Votes)
Total Voters:
16
Some definitions:
pi, also written $\pi$, is defined as the ratio of the circumference to the diameter of a circle. e is the natural logarithm constant, defined as $f(1)$ where $f$ is the unique differentiable function $f:\R\to\R$ satisfying $f(0) = 1$ and $f'(x) = f(x)$ for all $x\in \R$. The golden ratio is defined as $(1+\sqrt{5})/2$ |
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... |
September 16th, 2017, 07:05 PM
# 1
Newbie
Joined: Sep 2017
From: San Diego
Posts: 8
Thanks: 0
If p >= 5 is a prime number, then p^2 + 2 is composite.
I have seen outlines of the proof, and I tried to fill in the reasons why. Are they right?
If p>=5 is a prime number, then p^2 +2 is composite.
Proof: Suppose p >= 5 is a prime number. Then by the quotient remainder theorem, p can be expressed as
6m, 6m + 1, 6m + 2, 6m + 3, 6m + 4, or 6m + 5 for some integer m.
However, since p is a prime number greater than or equal to 5, it cannot be a multiple of 2 or 3, so the forms 6m, 6m + 2, 6m + 3, and 6m + 4 are excluded. Thus p can only be expressed as
6m+1 or 6m+5.
If p = 6m + 1, then by squaring p and adding 2, we get
p^2 + 2 = (6m+1)^2 + 2 = 36m^2 + 12m + 3 = 3(12m^2 + 4m + 1)
Thus p^2 + 2 is divisible by a number less than p^2 + 2 and greater than 1, so p^2 + 2 is composite.
If p = 6m + 5, then by similar reasoning we get
p^2 + 2 = (6m+5)^2 + 2 = 36m^2 + 60m + 27 = 3(12m^2 + 20m + 9)
Thus p^2 + 2 is divisible by a number less than p^2 + 2 and greater than 1, so p^2 + 2 is composite. Hence, in either case p^2 + 2 is composite, which is what we needed to show.
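A quick empirical check of the claim in Python (my own addition, not part of the thread):

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in range(5, 10000):
    if is_prime(p):
        # p^2 + 2 is divisible by 3 (and greater than 3), hence composite
        assert (p * p + 2) % 3 == 0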
September 16th, 2017, 10:36 PM
# 2
Math Team
Joined: Dec 2013
From: Colombia
Posts: 7,685
Thanks: 2665
Math Focus: Mainly analysis and algebra
Yes: $p = 6m \pm 1$ and so $p \equiv \pm 1 \pmod{6}$, hence $p^2 \equiv 1 \pmod{6}$ and $p^2 + 2 \equiv 3 \pmod{6}$; thus $p^2+2 = 6k+3 = 3(2k+1)$ is divisible by 3.
Last edited by v8archie; September 16th, 2017 at 10:39 PM.
I know I promised a post on regression, but then I realized I only have a shallow understanding of Boosting and AdaBoost. So I biked to the nearest public library, went to the index cards, searched for ‘Boost’ and after perusing through hundreds of self-help books, I found the greatest resource on AdaBoost: “How to Boost Your Spirit by Ada MacNally”. Nah, kidding. Such a book doesn’t exist. The following, as always, is my study notes, taken from
Foundations of Machine Learning by MIT Press and Machine Learning in Action by Peter Harrington published by Manning. The first book is heavy on math. The second book is more of a fluff book and is much simpler than the first one.
It is often difficult, for a non-trivial learning task, to directly devise an accurate algorithm satisfying the strong PAC-learning requirements that we saw in the PAC-learnable algorithm post I wrote before (link), but there can be more hope for finding simple predictors guaranteed only to perform slightly better than random. The following gives a formal definition of such
weak learners. As in the PAC-learning post, we let $n$ be a number such that the computational cost of representing any element $x \in \mathcal{X}$ is at most $O(n)$ and denote by size(c) the maximal cost of computational representation of $c \in \mathcal{C}$.
Let’s define
weak learning. A concept class $\mathcal{C}$ is said to be weakly PAC-learnable if there exists an algorithm $\mathcal{A}, \gamma > 0$ and a polynomial function $poly(., . , .)$ such that for any $\delta > 0$, for all distributions $\mathcal{D}$ on $\mathcal{X}$ and for any target concept $c \in \mathcal{C}$, the following holds for any sample size $m \geq poly(1/\delta,n,size(c))$:
\[ \underset{\mathcal{S} \sim \mathcal{D}^m}{\mathbb{P}}\left[R(h_S) \leq \frac{1}{2} - \gamma \right] \geq 1 - \delta, \]
where $h_S$ is the hypothesis returned by algorithm $\mathcal{A}$ when trained on sample $S$. When such an algorithm $\mathcal{A}$ exists, it is called a
weak learning algorithm for $\mathcal{C}$ or a weak learner. The hypothesis returned by a weak learning algorithm is called a base classifier.
The key idea behind boosting techniques is to use a weak learning algorithm to build a
strong learner. That is, an accurate PAC-learning algorithm. To do so, boosting techniques use an ensemble method: they combine different base classifiers returned by a weak learner to create more accurate predictors. But which base classifiers should be used and how should they be combined?
The algorithm takes as input a labeled sample $S=((x_1, y_1),\ldots,(x_m, y_m))$, with $(x_i, y_i) \in \mathcal{X} \times \{-1, +1\}$ for all $i \in [m]$, and maintains a distribution over the indices $\{1, \ldots, m\}$. Initially (lines 1-2), the distribution is uniform ($\mathcal{D}_1$). At each round of boosting, that is each iteration $t \in [T]$ of the loop in lines 3-8, a new base classifier $ h_t \in \mathcal{H} $ is selected that minimizes the error on the training sample weighted by the distribution $ \mathcal{D}_t$:
\[ h_t \in \underset{h \in \mathcal{H}}{argmin}\underset{i \sim \mathcal{D}_t}{\mathbb{P}} [h(x_i) \neq y_i] = \underset{h \in \mathcal{H}}{argmin}\sum_{i=1}^{m}\mathcal{D}_t(i)1_{h(x_i) \neq y_i} .\]
$Z_t$ is simply a normalization factor to ensure that the weights $D_{t+1}(i)$ sum to one. The precise reason for the definition of the coefficient $\alpha_t$ will become clear later. For now, observe that if $\epsilon_t$, the error of the base classifier, is less than $\frac{1}{2}$, then $\frac{1-\epsilon_t}{\epsilon_t} > 1$ and $\alpha_t$ is positive. Thus, the new distribution $\mathcal{D}_{t+1}$ is defined from $\mathcal{D}_t$ by substantially increasing the weight on $i$ if point $x_i$ is incorrectly classified, and, on the contrary, decreasing it if $x_i$ is correctly classified. This has the effect of focusing more on the points incorrectly classified at the next round of boosting, less on those correctly classified by $h_t$.
After $T$ rounds of boosting, the classifier returned by AdaBoost is based on the sign of the function $f$, which is a non-negative linear combination of the base classifiers $h_t$. The weight $\alpha_t$ assigned to $h_t$ in that sum is a logarithmic function of the ratio of the accuracy $1-\epsilon_t$ and error $\epsilon_t$ of $h_t$. Thus, more accurate base classifiers are assigned a larger weight in that sum.
For any $t \in [T]$, we will denote by $f_t$ the linear combination of the base classifiers after $t$ rounds of boosting: $f_t = \sum_{s=1}^{t}\alpha_sh_s$. In particular, we have $f_T = f$. The distribution $\mathcal{D}_{t+1}$ can be expressed in terms of $f_t$ and the normalization factors $Z_s$, $s \in [t]$ as follows:
\[ \forall i \in [m], \quad \mathcal{D}_{t+1}(i)=\frac{e^{-y_if_t(x_i)}}{ m \prod_{s=1}^{t}Z_s} .\]
The AdaBoost algorithm can be generalized in several ways:
– Instead of a hypothesis with minimal weighted error, $h_t$ can be more generally the base classifier returned by a weak learner algorithm trained on $D_t$;
– The range of the base classifiers could be $[-1, +1]$, or more generally a bounded subset of $\mathbb{R}$. The coefficients $\alpha_t$ can then be different and may not even admit a closed form. In general, they are chosen to minimize an upper bound on the empirical error, as discussed in the next section. Of course, in that general case the hypotheses $h_t$ are not binary classifiers, but their sign could define the label and their magnitude could be interpreted as a measure of confidence.
AdaBoost was originally designed to address the theoretical question of whether a weak learning algorithm could be used to derive a strong learning one. Here, we will show that it coincides in fact with a very simple algorithm, which consists of applying a general coordinate descent technique to a convex and differentiable objective function.
For simplicity, in this section, we assume that the base classifier set $\mathcal{H}$ is finite, with cardinality $N$: $\mathcal{H} = \{h_1, \ldots, h_N\}$. An ensemble function $f$ such as the one returned by AdaBoost can then be written as $f = \sum_{j=1}^N\bar{\alpha}_j h_j$. Given a labeled sample $S = ((x_1, y_1), \ldots, (x_m, y_m))$, let $F$ be the objective function defined for all $\boldsymbol{\bar{\alpha}} = (\bar{\alpha}_1, \ldots, \bar{\alpha}_N) \in \mathbb{R}^N$ by:
\[ F(\boldsymbol{\bar{\alpha}}) = \frac{1}{m}\sum_{i=1}^{m}e^{-y_i f(x_i)} = \frac{1}{m}\sum_{i=1}^m e^{-y_i \sum_{j=1}^{N}\bar{\alpha}_j h_j(x_i)}. \]
$F$ is a convex function of $\boldsymbol{\bar{\alpha}}$ since it is a sum of convex functions, each obtained by composition of the (convex) exponential function with an affine function of $\boldsymbol{\bar{\alpha}}$.
The AdaBoost algorithm takes the input dataset, the class labels, and the number of iterations. This is the only parameter you need to specify in most ML libraries.
Here, we briefly describe the standard practical use of AdaBoost. An important requirement for the algorithm is the choice of the base classifiers or that of the weak learner. The family of base classifiers typically used with AdaBoost in practice is that of
decision trees, which are equivalent to hierarchical partitions of the space. Among decision trees, those of depth one, also known as stumps, are by far the most frequently used base classifiers.
Boosting stumps are threshold functions associated to a single feature. Thus, a stump corresponds to a single axis-aligned partition of the space. If the data is in $\mathbb{R}^N$, we can associate a stump to each of the N components. Thus, to determine the stump with the minimal weighted error at each round of boosting, the best component and the best threshold for each component must be computed.
Now, let’s create a weak learner with a
decision stump, and then implement a different AdaBoost algorithm using it. If you’re familiar with decision trees, you’ll understand this part. However, if you’re not, either learn about them, or wait until I cover them tomorrow. A decision tree with only one split is a decision stump.
The pseudocode to generate a simple decision stump looks like this:
Set the minError to +∞
For every feature in the dataset:
    For every step:
        For each inequality:
            Build a decision stump and test it with the weighted dataset
            If the error is less than minError:
                Set this stump as the best stump
Return the best stump
Now that we have generated a decision stump, let’s train it:
For each iteration:
    Find the best stump using buildStump()
    Add the best stump to the stump array
    Calculate α
    Calculate the new weight vector D
    Update the aggregate class estimate
    If the error rate is 0:
        Break out of the loop
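Below is a minimal runnable sketch of the two pseudocode fragments above, in Python with NumPy. It is my own rendering, not the book’s listing: the names build_stump and adaboost, the threshold grid, and the small-epsilon guard are all choices made for the example.

import numpy as np

def build_stump(X, y, D):
    # exhaustively search every feature and threshold for the decision
    # stump with the lowest weighted error under the distribution D
    m, n = X.shape
    best = {"error": np.inf}
    for feat in range(n):
        for thresh in np.unique(X[:, feat]):
            for ineq in ("lt", "gt"):
                pred = np.ones(m)
                if ineq == "lt":
                    pred[X[:, feat] <= thresh] = -1.0
                else:
                    pred[X[:, feat] > thresh] = -1.0
                err = D @ (pred != y).astype(float)
                if err < best["error"]:
                    best = {"error": err, "feat": feat, "thresh": thresh,
                            "ineq": ineq, "pred": pred}
    return best

def adaboost(X, y, T=20):
    # y holds +1/-1 labels; T is the number of boosting rounds
    m = X.shape[0]
    D = np.full(m, 1.0 / m)          # D_1 is uniform
    agg = np.zeros(m)                # aggregate score f_t(x_i)
    stumps = []
    for t in range(T):
        stump = build_stump(X, y, D)
        eps = max(stump["error"], 1e-16)
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        stump["alpha"] = alpha
        stumps.append(stump)
        D *= np.exp(-alpha * y * stump["pred"])   # reweight the sample
        D /= D.sum()                              # normalize (this is Z_t)
        agg += alpha * stump["pred"]
        if np.all(np.sign(agg) == y):             # training error is 0
            break
    return stumps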
Well, that is it for today! I know I keep promising one thing, and deliver another, but this time I’m
really going to talk about decision trees!
Semper Fudge! |
Nearest Neighbour Distance Distribution Function of a Three-Dimensional Point Pattern
Estimates the nearest-neighbour distance distribution function \(G_3(r)\) from a three-dimensional point pattern.
Usage
G3est(X, ..., rmax = NULL, nrval = 128, correction = c("rs", "km", "Hanisch"))
Arguments X
Three-dimensional point pattern (object of class
"pp3").
…
Ignored.
rmax
Optional. Maximum value of argument \(r\) for which \(G_3(r)\) will be estimated.
nrval
Optional. Number of values of \(r\) for which \(G_3(r)\) will be estimated. A large value of
nrval is required to avoid discretisation effects.
correction
Optional. Character vector specifying the edge correction(s) to be applied. See Details.
Details
For a stationary point process \(\Phi\) in three-dimensional space, the nearest-neighbour function is $$ G_3(r) = P(d^\ast(x,\Phi) \le r \mid x \in \Phi) $$ the cumulative distribution function of the distance \(d^\ast(x,\Phi)\) from a typical point \(x\) in \(\Phi\) to its nearest neighbour, i.e. to the nearest
other point of \(\Phi\).
The three-dimensional point pattern
X is assumed to be a partial realisation of a stationary point process \(\Phi\). The nearest neighbour function of \(\Phi\) can then be estimated using techniques described in the References. For each data point, the distance to the nearest neighbour is computed. The empirical cumulative distribution function of these values, with appropriate edge corrections, is the estimate of \(G_3(r)\).
The available edge corrections are:
"rs":
the reduced sample (aka minus sampling, border correction) estimator (Baddeley et al, 1993)
"km":
the three-dimensional version of the Kaplan-Meier estimator (Baddeley and Gill, 1997)
"Hanisch":
the three-dimensional generalisation of the Hanisch estimator (Hanisch, 1984).
Alternatively
correction="all" selects all options.
Value
A function value table (object of class
"fv") that can be plotted, printed or coerced to a data frame containing the function values.
Warnings
A large value of
nrval is required in order to avoid discretisation effects (due to the use of histograms in the calculation).
References
Baddeley, A.J, Moyeed, R.A., Howard, C.V. and Boyde, A. (1993) Analysis of a three-dimensional point pattern with replication.
Applied Statistics 42, 641--668.
Baddeley, A.J. and Gill, R.D. (1997) Kaplan-Meier estimators of interpoint distance distributions for spatial point processes.
Annals of Statistics 25, 263--292.
Hanisch, K.-H. (1984) Some remarks on estimators of the distribution function of nearest neighbour distance in stationary spatial point patterns.
Mathematische Operationsforschung und Statistik, series Statistics 15, 409--412.
Examples
# NOT RUN
X <- rpoispp3(42)
Z <- G3est(X)
if (interactive()) plot(Z)
Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2) |
Idempotent Matrices are Diagonalizable Problem 429
Let $A$ be an $n\times n$ idempotent matrix, that is, $A^2=A$. Then prove that $A$ is diagonalizable.
The first proof directly shows that $\R^n$ is the direct sum of the eigenspaces $E_0$ and $E_1$ of $A$. The second proof proves the same direct sum expression, but we use a linear transformation.
The third proof discusses the minimal polynomial of $A$.
Range=Image, Null space=Kernel
In the following proofs, we use the terminologies
range and null space of a linear transformation. These are also called image and kernel of a linear transformation, respectively. Proof 1.
Recall that only possible eigenvalues of an idempotent matrix are $0$ or $1$.
(For a proof, see the post “Idempotent matrix and its eigenvalues“.)
Let
\[E_0=\{\mathbf{x}\in \R^n \mid A\mathbf{x}=\mathbf{0}\} \text{ and } E_{1}\{\mathbf{x}\in \R^n \mid A\mathbf{x}=\mathbf{x}\}\] be subspaces of $\R^n$. (Thus, if $0$ and $1$ are eigenvalues, then $E_0$ and $E_1$ are eigenspaces.)
Let $r$ be the rank of $A$. Then by the rank-nullity theorem, the nullity of $A$ is $n-r$, that is,
\[\dim(E_0)=n-r.\]
The rank of $A$ is the dimension of the range
\[\calR(A)=\{\mathbf{y} \in \R^n \mid \mathbf{y}=A\mathbf{x} \text{ for some } \mathbf{x}\in \R^n\}.\]
Let $\mathbf{y}_1, \dots, \mathbf{y}_r$ be basis vectors of $\calR(A)$.
Then there exists $\mathbf{x}_i\in \R^n$ such that \[\mathbf{y}_i=A\mathbf{x}_i,\] for $i=1, \dots, r$.
Then we have
\begin{align*} A\mathbf{y}_i&=A^2\mathbf{x}_i\\ &=A\mathbf{x}_i && \text{since $A$ is idempotent}\\ &=\mathbf{y}_i. \end{align*}
It follows that $\mathbf{y}_i\in E_1$.
Since $\mathbf{y}_i, i=1,\dots, r$ form a basis of $\calR(A)$, they are linearly independent and thus we have \[r\leq \dim(E_1).\]
We have
\begin{align*} &n=\dim(\R^n)\\ &\geq \dim(E_0)+\dim(E_1) && \text{since } E_0\cap E_1=\{\mathbf{0}\}\\ &\geq (n-r)+r=n. \end{align*}
So in fact all inequalities are equalities, and hence
\[\dim(\R^n)=\dim(E_0)+\dim(E_1).\]
This implies
\[\R^n=E_0 \oplus E_1.\] Thus $\R^n$ is a direct sum of eigenspaces of $A$, and hence $A$ is diagonalizable. Proof 2.
Let $E_0$ and $E_1$ be as in proof 1.
Consider the linear transformation $T:\R^n \to \R^n$ represented by the idempotent matrix $A$, that is, $T(\mathbf{x})=A\mathbf{x}$.
Then the null space $\calN(T)$ of the linear transformation $T$ is $E_0$ by definition.
We claim that the range $\calR(T)$ is $E_1$.
If $\mathbf{x}\in \calR(T)$, then we have $\mathbf{y}\in \R^n$ such that $\mathbf{x}=T(\mathbf{y})=A\mathbf{y}$.
Then we have
\begin{align*} \mathbf{x}&=A\mathbf{y}=A^2\mathbf{y} =A(A\mathbf{y}) =A\mathbf{x}. \end{align*} (The second equality follows since $A$ is idempotent.)
This implies that $\mathbf{x}\in E_1$, and hence $\calR(T) \subset E_1$.
On the other hand, if $\mathbf{x}\in E_1$, then we have
\[\mathbf{x}=A\mathbf{x}=T(\mathbf{x})\in \calR(T).\] Thus, we have $E_1\subset \calR(T)$. Putting these two inclusions together gives $E_1=\calR(T)$.
By the isomorphism theorem of vector spaces, we have
\[\R^n=\calN(T)\oplus \calR(T)=E_0\oplus E_1.\] Thus, $\R^n$ is a direct sum of eigenspaces of $A$ and hence $A$ is diagonalizable. Proof 3.
Since $A$ is idempotent we have $A^2=A$.
Thus we have $A^2-A=O$, the zero matrix, and so $A$ satisfies the polynomial $x^2-x$.
If $x^2-x=x(x-1)$ is not the minimal polynomial of $A$, then $A$ must be either the identity matrix or the zero matrix.
Since these matrices are diagonalizable (as they are already diagonal matrices), we consider the case when $x^2-x$ is the minimal polynomial of $A$.
Since the minimal polynomial has two distinct simple roots $0, 1$, the matrix $A$ is diagonalizable.
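A quick numerical illustration in Python with NumPy (mine, not part of the proofs): the projection matrix below is idempotent, its eigenvalues are $0$ and $1$, and conjugating by the eigenvector matrix diagonalizes it.

import numpy as np

A = np.array([[0.5, 0.5],
              [0.5, 0.5]])        # projection onto the span of (1, 1)
assert np.allclose(A @ A, A)      # A is idempotent

evals, S = np.linalg.eig(A)       # eigenvalues are 0 and 1
D = np.linalg.inv(S) @ A @ S      # S^{-1} A S
assert np.allclose(D, np.diag(evals))   # A is diagonalizable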
Another Proof
A slightly different proof is given in the post “Idempotent (Projective) Matrices are Diagonalizable“.
The proof there is a variation of Proof 2.
|
First of all, it is clear that $\Z[\sqrt{2}]$ is an integral domain since it is contained in $\R$.
We use the norm given by the absolute value of the field norm. Namely, for each element $a+\sqrt{2}b\in \Z[\sqrt{2}]$, define \[N(a+\sqrt{2}b)=|a^2-2b^2|.\] Then the map $N:\Z[\sqrt{2}] \to \Z_{\geq 0}$ is a norm on $\Z[\sqrt{2}]$. Also, it is multiplicative: \[N(xy)=N(x)N(y).\] Remark that since this norm comes from the field norm of $\Q(\sqrt{2})$, the multiplicativity of $N$ holds for $x, y \in \Q(\sqrt{2})$ as well.
We show the existence of a Division Algorithm as follows. Let \[x=a+b\sqrt{2} \text{ and } y=c+d\sqrt{2}\] be arbitrary elements in $\Z[\sqrt{2}]$, where $a,b,c,d\in \Z$.
We have\begin{align*}\frac{x}{y}=\frac{a+b\sqrt{2}}{c+d\sqrt{2}}=\frac{(ac-2bd)+(bc-ad)\sqrt{2}}{c^2-2d^2}=r+s\sqrt{2},\end{align*}where we put\[r=\frac{ac-2bd}{c^2-2d^2} \text{ and } s=\frac{bc-ad}{c^2-2d^2}.\]
Let $n$ be an integer closest to the rational number $r$ and let $m$ be an integer closest to the rational number $s$, so that\[|r-n| \leq \frac{1}{2} \text{ and } |s-m| \leq \frac{1}{2}.\]
Let\[t:=r-n+(s-m)\sqrt{2}.\]
Then we have\begin{align*}t&=r+s\sqrt{2}-(n+m\sqrt{2})\\&=\frac{x}{y}-(n+m\sqrt{2}).\end{align*}
It follows that\begin{align*}yt=x-(n+m\sqrt{2})y \in \Z[\sqrt{2}].\end{align*}
Thus we have\begin{align*}x=(n+m\sqrt{2})y+yt \tag{*}\end{align*}with $n+m\sqrt{2}, yt\in \Z[\sqrt{2}]$.
We have\begin{align*}N(t)&= |(r-n)^2-2(s-m)^2|\\&\leq |r-n|^2+2|s-m|^2\\& \leq \frac{1}{4}+2\cdot\frac{1}{4}=\frac{3}{4}.\end{align*}
It follows from the multiplicativity of the norm $N$ that\begin{align*}N(yt)=N(y)N(t)\leq \frac{3}{4}N(y)< N(y).\end{align*}Thus the expression (*) gives a Division Algorithm with quotient $n+m\sqrt{2}$ and remainder $yt$.
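The division algorithm above is easy to run. Here is a short Python sketch (my own, with hypothetical helper names norm and divide); elements $a+b\sqrt{2}$ are represented as integer pairs $(a, b)$.

from fractions import Fraction

def norm(a, b):
    # N(a + b*sqrt(2)) = |a^2 - 2 b^2|
    return abs(a * a - 2 * b * b)

def divide(x, y):
    # return (q, t) with x = q*y + t and N(t) < N(y)
    a, b = x
    c, d = y
    den = c * c - 2 * d * d
    r = Fraction(a * c - 2 * b * d, den)
    s = Fraction(b * c - a * d, den)
    n, m = round(r), round(s)            # nearest integers to r and s
    # t = x - (n + m*sqrt(2)) y, computed in Z[sqrt(2)]:
    t = (a - (n * c + 2 * m * d), b - (n * d + m * c))
    return (n, m), t

x, y = (7, 5), (3, 1)        # divide 7 + 5*sqrt(2) by 3 + sqrt(2)
q, t = divide(x, y)
assert norm(*t) < norm(*y)
print(q, t)                  # q = (2, 1), t = (-1, 0)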
Related Question.
Problem. In the ring $\Z[\sqrt{2}]$, prove that $5$ is a prime element but $7$ is not a prime element.
|
Let \( A \) be any real square matrix (not necessarily symmetric). Prove that: $$ (x'A x)^2 \leq (x'A A'x)(x'x) $$
The key point in proving this inequality is to recognize that \( x'A A'x \) can be expressed as vector norm of \( A'x \).
Proof:
If \( x=0 \), then the inequality is trivial.
Suppose \( x \neq 0 \).
\( \frac{x'A x}{x'x}
= \frac{(A'x)'x}{\| x \|^2} = (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} \)
Because \( \frac{x}{\| x \|} \) is a unit vector, \( A'\frac{x}{\| x \|} \) can be considered as a scaling and rotation of \( \frac{x}{\| x \|} \) by \( A' \). Thus, the resulting vector \( A'\frac{x}{\| x \|} \) has norm \( \alpha \) for some \( \alpha \geq 0 \), and \( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|}=\alpha \cos(\beta) \) for some \( -\pi \leq \beta \leq \pi \), where \( \beta \) is the angle between the vector before and after premultiplying by \( A' \).
Now:
\( ( \frac{x'A x}{x'x} )^2 \)
\(= ( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} )^2 \)
\( =\alpha^2 \cos^2(\beta) \)
\( \leq \alpha^2 \)
\(= (A'\frac{x}{\| x \|})'A'\frac{x}{\| x \|} \)
\(= \frac{(A'x)'A'x}{\| x \|^2} \)
\(= \frac{x'A A'x}{x'x} \)
Finally, multiplying both sides by \( (x'x)^2 \) completes the proof.
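A quick numerical sanity check (mine, not part of the proof) of the inequality for random non-symmetric \( A \), written in Python with NumPy:

import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.normal(size=(5, 5))   # not necessarily symmetric
    x = rng.normal(size=5)
    lhs = (x @ A @ x) ** 2
    rhs = (x @ A @ A.T @ x) * (x @ x)
    assert lhs <= rhs + 1e-9      # Cauchy-Schwarz form of the inequality
|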
We have a, b ∈ R \ Q, a < b.A = {x ∈ Q: a < x < b}
Show that A is clopen (open and closed) in Q.
I have been trying to solve this problem for the last 2 hours.
I need some hints or examples.
Thank you in advance.
Assuming you give $\displaystyle \mathbb{Q}$ the topology inherited from $\displaystyle \mathbb{R}$ then open sets in $\displaystyle \mathbb{Q}$ are those of the form $\displaystyle A \cap \mathbb{Q}$ where $\displaystyle A$ is open in $\displaystyle \mathbb{R}$. Is $\displaystyle A_1=\{ x \in \mathbb{R} : a<x<b \}$ open? $\displaystyle A_1$ fails to be closed in $\displaystyle \mathbb{R}$ because it lacks $\displaystyle a$ and $\displaystyle b$, but is that a problem in $\displaystyle \mathbb{Q}$?
|
I know there's a thread similar to this one here, but the OP is asking the reverse of what I'm trying to find here. I've done some research on the web with very few sources coming up with actual solutions to this problem. What techniques are used to give an approximation of an FIR filter given one or more IIR filters with say the same order?
Approximating the frequency response of an IIR filter or physical process using an FIR filter is useful in learning control. It is quite common to do FIR filter design based on frequency response specifications. You probably want to check out two standard papers on the subject:
[1] J. H. McClellan, T. W. Parks, and L. R. Rabiner, “A computer program for designing optimum FIR linear phase digital filters,” IEEE Trans. Audio Electroacoust., vol. 21, no. 6, pp. 506–526, 1973.
[2] L. R. Rabiner, “Techniques for Designing Finite-Duration Impulse-Response Digital Filters,” IEEE Trans. Commun. Technol., vol. 19, no. 2, pp. 188–195, Apr. 1971.
Broadly, you either do windowed direct sampling of your desired frequency response, or you use one of several optimization methods to achieve similar results. If you disregard the linear phase delay in an FIR, you can practically make the IIR and FIR responses identical, if the FIR filter order is high enough.
As an elaboration on one of the other answers given; if you have an IIR filter $G(z^{-1})$, then you can do FIR filter design by
frequency sampling by taking $M$ samples of the frequency response of $G(z^{-1})$, denoted $\widehat{G}(k)$, and then taking the inverse discrete Fourier transform (IDFT) of $\widehat{G}(k)$. The unit impulse response $g(n)$ of $\widehat{G}(k)$ is \begin{align*} g(n) = \frac{1}{M} \sum\limits_{k=0}^{M-1} \widehat{G}(k) \text{e}^{j \frac{2 \pi k n}{M}} \; , \end{align*}where $n \in [0, M-1] \cap \mathbb{N}_{0}$. The FIR filter is then expressed in the $z$-domain as \begin{multline*} F(z^{-1}) = g(0) + g(1) z^{-1} + ... + g(M-1) z^{-M+1} = \sum_{n=0}^{M-1} g(n) z^{-n} \; . \end{multline*}
The frequency-sampling method results in a unit impulse response which has been convoluted with a rectangular window of the same length in the frequency domain. The frequency response of $F(z^{-1})$ is therefore affected by the large side-lobes of the rectangular window. As a result, the approximation error of $F(z^{-1})$ is large between the frequency samples. This can be alleviated by the use of a window that do not contain abrupt discontinuities in the time domain, and thus have small side-lobes in the frequency domain, i.e., the window smooths the frequency response of $F(z^{-1})$.
A windowed FIR filter $\tilde{h}(n)$ is created from an un-windowed FIR filter $h(n)$ as \begin{align*} \tilde{h}(n) = w(n) h(n) \end{align*} where $w(n)$ is a window function which is non-zero only for $n \in [0, M-1] \cap \mathbb{N}_{0}$. The frequency-domain representation of the window function $W(k)$ is found as \begin{multline*} W(k) = \sum\limits_{n=0}^{M-1} w(n-M/2) \textrm{e}^{-j \frac{2 \pi k n}{M}} = \left[ \sum\limits_{n=0}^{M-1} w(n) \textrm{e}^{-j \frac{2 \pi k n}{M}} \right] \textrm{e}^{-j \frac{2 \pi k}{M} \frac{M}{2} } \; , \end{multline*} where the term $\textrm{e}^{-j (2 \pi k / M) (M/2) }$ comes from the fact that the rectangular window is not centered around $n=0$, but is time-shifted to be centered around $n=M/2$. This phase term will cause distortion of $h(n)$, unless $h(n)$ is also phase-shifted to compensate. The unit impulse response $g(n)$ is therefore phase-shifted before windowing. Due to the circular shift property of the DFT, this can be done by rearranging $g(n)$ such that \begin{equation*} \bar{g}\left( n \right) = \begin{cases} g\left( n + M/2 \right) , & \hspace{-0.6em} n = 0,1, ..., \frac{M}{2} - 1 \\ g\left( n - M/2 \right) , & \hspace{-0.6em} n = \frac{M}{2},\frac{M}{2}+1, ..., M-1 \end{cases} \end{equation*} for the case when $M$ is even. The response is then represented by the FIR filter \begin{equation*} \bar{F}(z^{-1}) = \sum_{n=0}^{M-1} \bar{g}(n) z^{-n} = z^{-M/2} F(z^{-1}) \end{equation*} which is $F(z^{-1})$ delayed by $M/2$ steps. Applying the window $w(n)$ to the time-shifted impulse response $\bar{g}(n)$, \begin{equation*} \tilde{g}(n) = w(n) \bar{g}(n) \; , \end{equation*} the filter \begin{equation*} \tilde{F}(z^{-1}) = W(z^{-1})*\left[ z^{-M/2} F(z^{-1}) \right] \end{equation*} is obtained. Now, $G^{-1}(z^{-1}) \left[ W(z^{-1})*F(z^{-1}) \right] \approx 1$ if the FIR filter is accurate. Note that the phase due to $z^{-M/2}$ is taken out.
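As an illustration, here is a short Python/SciPy sketch of the frequency-sampling design described above (my own code, not the answerer’s): sample the IIR response at $M$ points around the unit circle, take the IDFT, circularly shift by $M/2$, and apply a window. The Hamming window and $M = 64$ are arbitrary choices.

import numpy as np
from scipy import signal

b, a = [1.0], [1.0, -0.9]        # example IIR: H(z) = 1/(1 - 0.9 z^-1)
M = 64                           # FIR length, assumed even

# M samples of the frequency response around the whole unit circle
w, G = signal.freqz(b, a, worN=M, whole=True)

g = np.real(np.fft.ifft(G))      # g(n): IDFT of the samples (time-aliased)
g_bar = np.roll(g, M // 2)       # centre the response around n = M/2
g_tilde = g_bar * np.hamming(M)  # window to suppress the side-lobes

# g_tilde holds the coefficients of the FIR F(z^-1), up to an M/2 delay
w2, F = signal.freqz(g_tilde, 1, worN=1024)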
The easiest approach is to consider the impulse response of the IIR which is infinite and truncate it somewhere (depending on what order you consider for the approximate FIR filter).
For example, consider the IIR filter with the impulse response $h[n]=a^nu[n]$, where $a$ is positive and $|a|<1$. We can represent it as $$h[n]=\sum_{k=0}^{\infty} a^k\delta[n-k]$$
So the impulse response of the $N$'th order approximation FIR filter would be $$h_{\text{FIR}}[n]=\sum_{k=0}^{N} a^k\delta[n-k]$$
Larger $N$ you consider, closer the FIR will be to the original IIR.
This is an easy approach to simulate the IIR filter's behavior in general. You should be more specific about what aspect of the IIR filter you want to simulate (e.g, pass-band behavior, pass-stop transition, etc.) to get a more specialized answer.
In the example below the IIR filter $$H(z)=\frac{1}{1-0.9z^{-1}}$$ is approximated by three FIR filters of orders $N=10,15,25$ where $$H_{\text{FIR}}(z)=\sum_{k=0}^{N} 0.9^kz^{-k}$$
b1 = 1; a1 = [1 -0.9];            % IIR filter with impulse response (0.9)^n*u[n]
[H,w] = freqz(b1,a1);
% Plot the magnitude response in dB
plot(w/pi,20*log10(abs(H)),'b','Linewidth',2); hold on;
labels = {'IIR Filter'};
color = ['k','g','r'];
N = [10 15 25];                   % Three different FIR filter orders
for i = 1:3
    b2 = 0.9.^(0:N(i));           % Truncate the impulse response at order N(i)
    [H,w] = freqz(b2,1);          % frequency response of the FIR filter
    plot(w/pi,20*log10(abs(H)),color(i));
    labels{end+1} = ['FIR order = ' num2str(N(i))];
end
grid on
legend(labels)
xlabel('Normalized Frequency')
ylabel('Magnitude (dB)')
[EDIT: beyond my initial belief that "nobody does that", the OP made me think of situations where this could be useful. Let's start with the obvious]
Given the FIR with $z$-transform: $$\sum_{i=0}^P b_iz^{-i},$$ you can get a very close IIR approximation with:
$$\frac{\sum_{i=0}^P b_iz^{-i}}{1+\sum_{j=1}^Q a_iz^{-i}}$$ with $Q\le P$, and the $a_i$ of very small absolute value, as long as you want to keep "say the same order". An example is given below. I still wonder about the practical interest of such a design.
Perhaps to introduce some instability in an FIR, that is too good in that respect :)
data = randn(1024,1);
fFIRNum = [1 2 1];                % FIR numerator
fFIRDen = 1;                      % FIR denominator
fIIRDen = [1 0 1e-6];             % IIR denominator with tiny extra coefficients
subplot(3,1,1)
plot(data)
legend('Data')
axis tight; grid on
subplot(3,1,2)
plot([filter(fFIRNum,fFIRDen,data), filter(fFIRNum,fIIRDen,data)])
legend('FIR','IIR')
axis tight; grid on
subplot(3,1,3)
plot(filter(fFIRNum,fFIRDen,data) - filter(fFIRNum,fIIRDen,data))
legend('FIR/IIR difference')
axis tight; grid on
The obvious put aside, let me imagine a context where an IIR approximation could be useful. Suppose you want to perform a moving average filtering. If you want to make it adaptive, you have to change the length of the window, and a sudden change in the number of averaged samples could affect the smoothed signal abruptly. At minimum, you can only change the window length by $\pm 1$ unit length. The Exponentially Weighted Moving Average (EWMA) $$y(n) = ax(n) + (1 - a)y(n-1)\,.$$ is an IIR. It may mimic FIR rectangular windows of different lengths, depending on the forgetting factor $a$. The Exponentially Weighted Moving Average has been discussed here recently.
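A small Python sketch of the EWMA (my own; the rule of thumb a = 2/(L+1) for roughly matching a length-L rectangular window is a common heuristic, not from the answer):

import numpy as np

def ewma(x, a):
    # y(n) = a*x(n) + (1 - a)*y(n-1), seeded with y(0) = x(0)
    y = np.empty(len(x))
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = a * x[n] + (1 - a) * y[n - 1]
    return y

x = np.random.default_rng(1).normal(size=500).cumsum()
smooth_short = ewma(x, 2 / (8 + 1))    # behaves like a short moving average
smooth_long = ewma(x, 2 / (64 + 1))    # behaves like a long moving average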
One could perform an adaptive EWMA by smoothly varying $a$ in a more continuous way than hopping the window length from sample to sample. One instance can be found in An Adaptive Exponentially Weighted Moving Average Control Chart, 2003. |
In the introduction to chapter VIII of Dieudonné's
Foundations of Modern Analysis (Volume 1 of his 13-volume Treatise on Analysis), he makes the following argument:
Finally, the reader will probably observe the conspicuous absence of a time-honored topic in calculus courses, the “Riemann integral”. It may well be suspected that, had it not been for its prestigious name, this would have been dropped long ago, for (with due reverence to Riemann’s genius) it is certainly quite clear to any working mathematician that nowadays such a “theory” has at best the importance of a mildly interesting exercise in the general theory of measure and integration (see Section 13.9, Problem 7). Only the stubborn conservatism of academic tradition could freeze it into a regular part of the curriculum, long after it had outlived its historical importance. Of course, it is perfectly feasible to limit the integration process to a category of functions which is large enough for all purposes of elementary analysis (at the level of this first volume), but close enough to the continuous functions to dispense with any consideration drawn from measure theory; this is what we have done by defining only the integral of regulated functions (sometimes called the “Cauchy integral”). When one needs a more powerful tool, there is no point in stopping halfway, and the general theory of (“Lebesgue”) integration (Chapter XIII) is the only sensible answer.
I've always doubted the value of the theory of Riemann integration in this day and age. The so-called Cauchy integral is, as Dieudonné suggests, substantially easier to define (and prove the standard theorems about), and can also integrate essentially every function that we might want in a first semester analysis/honors calculus course.
For any other sort of application of integration theory, it becomes more and more worthwhile to develop the full theory of measure and integration (this is exactly what we did in my second (roughly) course on analysis, so wasn't the time spent on the Riemann integral wasted?).
Why bother dealing with the Riemann (or Darboux or any other variation) integral in the face of Dieudonné's argument?
Edit: The Cauchy integral is defined as follows:
Let $f$ be a mapping of an interval $I \subset \mathbf{R}$ into a Banach space $F$. We say that a continuous mapping $g$ of $I$ into $F$ is a primitive of $f$ in $I$ if there exists a denumerable set $D \subset I$ such that, for any $\xi \in I - D$, $g$ is differentiable at $\xi$ and $g'(\xi) = f(\xi)$.
If $g$ is any primitive of a regulated function $f$, the difference $g(\beta) - g(\alpha)$, for any two points of $I$, is independent of the particular primitive $g$ which is considered, owing to (8.7.1); it is written $\int_\alpha^\beta f(x) dx$, and called the integral of $f$ between $\alpha$ and $\beta$. (A map $f$ is called regulated provided that it has one-sided limits at every point of $I$.)
Edit 2: I thought this was clear, but I meant this in the context of a course where the theory behind the integral is actually discussed. I do not think that an engineer actually has to understand the formal theory of Riemann integration in his day-to-day use of it, so I feel that the objections below are absolutely beside the point. This question is then, of course, in the context of an "honors calculus" or "calculus for math majors" course. |
2019-09-04 12:06
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
2019-08-15 17:39
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
2019-08-15 17:36
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
2019-02-12 14:01
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) Recent years have seen a resurgence of interest in searches for exotic states, motivated by precision spectroscopy studies of beauty and charm hadrons that have provided the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
2019-01-21 09:59
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that Online data are immediately available offline for physics analysis (Turbo analysis). The computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1\,\text{MeV}\ n_{\text{eq}}\,\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8. |
The subgradient algorithm for minimizing a convex function $f(x)$ is the update rule $$ x(t+1) = x(t) - \alpha(t) d(t)$$ where $d(t)$ is any subgradient of $f(x)$ at $x(t)$ and $\alpha(t)$ is a decaying stepsize. Choosing $\alpha(t) = 1/\sqrt{t}$, one usually obtains the bound $$ f \left( \frac{\sum_{j=1}^t \alpha(j) x(j)}{\sum_{j=1}^t \alpha(j)}\right) - f^* \leq O \left( \frac{||x(0) - x^*||_2^2 + L \ln t}{ \sqrt{t}} \right)$$ where $L$ is an upper bound on the norm of all the subgradients that appear by time $t$, and we assume that the optimal value is $f^*$.
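To make the update rule concrete, here is a minimal Python sketch of the iteration and of the weighted-average point that the bound refers to (my own illustration; the function name and the test objective are made up):

import numpy as np

def subgradient_method(subgrad, x0, steps=10000):
    # Minimize a convex f given an oracle subgrad(x) returning any
    # subgradient of f at x. Returns the alpha-weighted average of the
    # iterates, which is the point the O((log t)/sqrt(t)) bound is about.
    x = np.asarray(x0, dtype=float)
    weighted_sum = np.zeros_like(x)  # running sum of alpha(j) * x(j)
    weight_total = 0.0               # running sum of alpha(j)
    for t in range(1, steps + 1):
        alpha = 1.0 / np.sqrt(t)     # decaying stepsize alpha(t) = 1/sqrt(t)
        weighted_sum += alpha * x
        weight_total += alpha
        x = x - alpha * subgrad(x)   # x(t+1) = x(t) - alpha(t) d(t)
    return weighted_sum / weight_total

# Example: f(x) = |x|, with subgradient sign(x); the minimizer is 0.
x_avg = subgradient_method(np.sign, x0=[5.0])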
There are a number of puzzling things about the subgradient method that I don't feel I have a good handle on.
Why is the bound for a convex combination of $x(1), \ldots, x(t)$ instead of for $f(x(t))$? Is it possible to derive a similar bound for $f(x(t))$? Specifically, is it true that $f(x(t))-f^* = O( \log t/t)$ (taking $L$ and $||x(0)-x^*||^2$ as constants in the O-notation)? If not, is there a clear reason why this is unreasonable?
I think I understand why the bound is on $f(\cdot) - f^*$ rather than on the distance $||x(t)-x^*||_2^2$ - it has to do with functions that are nearly flat. For example, if $f(x)$ is a tiny perturbation of the zero function, it may take a while for the subgradient algorithm to find the optimal point. So, suppose we assume that every subgradient that enters the method has norm at least $l>0$. Does that allow us to derive a similar bound on $||x(t) - x^*||_2^2$?
Note: I asked this question on mathoverflow a week ago where it attracted no responses. |
Since $U$ is not contained in $V$, there exists a vector $\mathbf{u}\in U$ but $\mathbf{u} \not \in V$. Similarly, since $V$ is not contained in $U$, there exists a vector $\mathbf{v} \in V$ but $\mathbf{v} \not \in U$.
Seeking a contradiction, let us assume that the union $U \cup V$ is a subspace of $\R^n$. The vectors $\mathbf{u}, \mathbf{v}$ lie in the vector space $U \cup V$. Thus their sum $\mathbf{u}+\mathbf{v}$ is also in $U\cup V$. This implies that we have either\[\mathbf{u}+\mathbf{v} \in U \text{ or } \mathbf{u}+\mathbf{v}\in V.\]
If $\mathbf{u}+\mathbf{v} \in U$, then there exists $\mathbf{u}’\in U$ such that\[\mathbf{u}+\mathbf{v}=\mathbf{u}’.\]Since the vectors $\mathbf{u}$ and $\mathbf{u}’$ are both in the subspace $U$, their difference $\mathbf{u}’-\mathbf{u}$ is also in $U$. Hence we have\[\mathbf{v}=\mathbf{u}’-\mathbf{u} \in U.\]
However, this contradicts the choice of the vector $\mathbf{v} \not \in U$.
Thus, we must have $\mathbf{u}+\mathbf{v}\in V$. In this case, there exists $\mathbf{v}’ \in V$ such that\[\mathbf{u}+\mathbf{v}=\mathbf{v}’.\]
Since both $\mathbf{v}, \mathbf{v}’$ are vectors of $V$, it follows that\[\mathbf{u}=\mathbf{v}’-\mathbf{v}\in V,\]which contradicts the choice of $\mathbf{u} \not\in V$.
Therefore, we have reached a contradiction. Thus, the union $U \cup V$ cannot be a subspace of $\R^n$.
Related Question.
In fact, the converse of this problem is true.
Problem. Let $W_1, W_2$ be subspaces of a vector space $V$. Then prove that $W_1 \cup W_2$ is a subspace of $V$ if and only if $W_1 \subset W_2$ or $W_2 \subset W_1$. |
In a previous post I attempted to use the katex plugin to render an old post instead of using Mathjax. It seems that was not actually rendered with KaTex, but (I think) it was rendered with the latex keyword handling in the Jetpack plugin, which I also had installed. I’ve customized the katex plugin I have installed to use a different keyword (katex instead of latex).
This is a test of KaTex, the latex rendering engine used for Khan academy. They advertise themselves as much faster than mathjax, but this speed comes with some usability issues.
Here’s a rerendering of an old post, with the latex rendered with WP-KaTeX instead of MathJax-LaTeX.
Problem:
Calculate the field due to a spherical shell. The field is
[katex display=”true”]\mathbf{E} = \frac{\sigma}{4 \pi \epsilon_0} \int \frac{(\mathbf{r} – \mathbf{r}’)}{{{\left\lvert{{\mathbf{r} – \mathbf{r}’}}\right\rvert}}^3} da’,[/katex]
where [katex]\mathbf{r}'[/katex] is the position to the area element on the shell. For the test position, let [katex]\mathbf{r} = z \mathbf{e}_3[/katex].
Solution:
We need to parameterize the area integral. A complex-number like geometric algebra representation works nicely.
[katex display=”true”]\begin{aligned}\mathbf{r}’ &= R \left( \sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta \right) \\ &= R \left( \mathbf{e}_1 \sin\theta \left( \cos\phi + \mathbf{e}_1 \mathbf{e}_2 \sin\phi \right) + \mathbf{e}_3 \cos\theta \right) \\ &= R \left( \mathbf{e}_1 \sin\theta e^{i\phi} + \mathbf{e}_3 \cos\theta \right).\end{aligned}[/katex]
Here [katex]i = \mathbf{e}_1 \mathbf{e}_2[/katex] has been used to represent the horizontal rotation plane.
The difference in position between the test vector and area-element is
[katex display=”true”]\mathbf{r} – \mathbf{r}’ = \mathbf{e}_3 {\left({ z – R \cos\theta }\right)} – R \mathbf{e}_1 \sin\theta e^{i \phi},[/katex]
with an absolute squared length of
[katex display=”true”]\begin{aligned}{{\left\lvert{{\mathbf{r} – \mathbf{r}’ }}\right\rvert}}^2 &= {\left({ z – R \cos\theta }\right)}^2 + R^2 \sin^2\theta \\ &= z^2 + R^2 – 2 z R \cos\theta.\end{aligned}[/katex]
As a side note, this is a kind of fun way to prove the old “cosine-law” identity. With that done, the field integral can now be expressed explicitly
[katex display=”true”]\begin{aligned} \mathbf{E} &= \frac{\sigma}{4 \pi \epsilon_0} \int_{\phi = 0}^{2\pi} \int_{\theta = 0}^\pi R^2 \sin\theta d\theta d\phi \frac{\mathbf{e}_3 {\left({ z – R \cos\theta }\right)} – R \mathbf{e}_1 \sin\theta e^{i \phi}} { {\left({z^2 + R^2 – 2 z R \cos\theta}\right)}^{3/2} } \\ &= \frac{2 \pi R^2 \sigma \mathbf{e}_3}{4 \pi \epsilon_0} \int_{\theta = 0}^\pi \sin\theta d\theta \frac{z – R \cos\theta} { {\left({z^2 + R^2 – 2 z R \cos\theta}\right)}^{3/2} } \\ &= \frac{2 \pi R^2 \sigma \mathbf{e}_3}{4 \pi \epsilon_0} \int_{\theta = 0}^\pi \sin\theta d\theta \frac{ R( z/R – \cos\theta) } { (R^2)^{3/2} {\left({ (z/R)^2 + 1 – 2 (z/R) \cos\theta}\right)}^{3/2} } \\ &= \frac{\sigma \mathbf{e}_3}{2 \epsilon_0} \int_{u = -1}^{1} du \frac{ z/R – u} { {\left({1 + (z/R)^2 – 2 (z/R) u}\right)}^{3/2} }. \end{aligned}[/katex]
Observe that all the azimuthal contributions get killed. We expect that due to the symmetry of the problem. We are left with an integral that submits to Mathematica, but doesn’t look fun to attempt manually. Specifically
[katex display=”true”]\int_{-1}^1 \frac{a-u}{{\left({1 + a^2 – 2 a u}\right)}^{3/2}} du = \frac{2}{a^2},[/katex]
if [katex]a > 1[/katex], and zero otherwise, so
[katex display=”true”]\boxed{ \mathbf{E} = \frac{\sigma (R/z)^2 \mathbf{e}_3}{\epsilon_0} }[/katex]
for [katex]z > R[/katex], and zero otherwise.
In the problem, it is pointed out to be careful of the sign when evaluating [katex]\sqrt{ R^2 + z^2 – 2 R z }[/katex], however, I don’t see where that is even useful?
KaTex commentary
Conditional patterns, such as:
\left\{
\begin{array}{l l}
\frac{\sigma (R/z)^2 \mathbf{e}_3}{\epsilon_0}
& \quad \mbox{if $ z > R $ } \\
0 & \quad \mbox{if $ z < R $ }
\end{array}
\right.
messed up KaTex, resulting in render errors.
Using \( ... \) within math mode instead of $ ... $ also messed things up. Example:
\left\{
\begin{array}{l l}
\frac{\sigma (R/z)^2 \mathbf{e}_3}{\epsilon_0}
& \quad \mbox{if $ z > R $ } \\
0 & \quad \mbox{if $ z < R $ }
\end{array}
\right.
This resulted in a messed up parse.
It looks like it's the mbox that messes things up, and not the array itself, so \text could probably be used instead.
The latex has to be all in one line, or else KaTex renders the newlines explicitly. Example:
Having to condense all my latex onto a single line is one of the reasons I switched from the default wordpress latex engine to mathjax. It was annoying enough that I started paying for my wordpress hosting, and stopped posting on my old free peeterjoot.wordpress.com blog. Using KaTex and having to go back to single line latex would suck!
- The rendering looks great, just like mathjax.
- The Mathjax-Latex wordpress plugin has some support for equation labeling and references. I don't see a way to do those with the WP-KaTex plugin.
- I can have a large set of macros installed in my default.js matching a subset of what I have in my .sty files. I don't see a way to do that with the WP-KaTex plugin, but perhaps there is just no documented mechanism. KaTex itself does have a macro mechanism.
- The display isn't left justified like the wordpress latex, and looks decent. |
The Polynomial Rings $\Z[x]$ and $\Q[x]$ are Not Isomorphic Problem 494
Prove that the rings $\Z[x]$ and $\Q[x]$ are not isomorphic.
Proof.
We give three proofs.
The first two proofs use only the properties of ring homomorphisms.
The third proof resorts to the units of rings.
If you are familiar with units of $\Z[x]$, then the third proof might be concise and easy to follow.
The First Proof
Assume on the contrary that the rings $\Z[x]$ and $\Q[x]$ are isomorphic.
Let \[\phi:\Q[x] \to \Z[x]\] be an isomorphism.
The polynomial $x$ in $\Q[x]$ is mapped to the polynomial $\phi(x)\in \Z[x]$.
Note that $\frac{x}{2^n}$ is an element in $\Q[x]$ for any positive integer $n$.
Thus we have \begin{align*} \phi(x)&=\phi(2^n\cdot \frac{x}{2^n})\\ &=2^n\phi\left(\frac{x}{2^n}\right) \end{align*} since $\phi$ is a homomorphism.
As $\phi$ is injective, the polynomial $\phi(\frac{x}{2^n})\neq 0$.
Since $\phi(\frac{x}{2^n})$ is a nonzero polynomial with integer coefficients, the absolute values of the nonzero coefficients of $2^n\phi(\frac{x}{2^n})$ are at least $2^n$.
However, since this is true for any positive integer $n$, the nonzero coefficients of the polynomial $\phi(x)=2^n\phi(\frac{x}{2^n})$ would have to be arbitrarily large, which is impossible.
Thus, there is no isomorphism between $\Q[x]$ and $\Z[x]$.
The Second Proof
Seeking a contradiction, assume that we have an isomorphism
\[\phi:\Q[x] \to \Z[x].\]
Since $\phi$ is a ring homomorphism, we have $\phi(1)=1$.
Then we have \begin{align*} 1&=\phi(1)=\phi \left(2\cdot \frac{1}{2}\right)\\ &=2\phi\left( \frac{1}{2} \right) \end{align*} since $\phi$ is a homomorphism.
Since $\phi\left( \frac{1}{2} \right)\in \Z[x]$, we write
\[\phi\left( \frac{1}{2} \right)=a_nx^n+a_{n-1}x^{n-1}+\cdots + a_1x+a_0,\] for some integers $a_0, \dots, a_n$.
Since $2\phi\left( \frac{1}{2} \right)=1$, it follows that
\[2a_n=0, \dots, 2a_1=0, 2a_0=1.\] Since $a_0$ is an integer, this is a contradiction. Thus, such an isomorphism does not exist. Hence $\Q[x]$ and $\Z[x]$ are not isomorphic.
The Third Proof
Note that in general the units of the polynomial ring $R[x]$ over an integral domain $R$ are the units $R^{\times}$ of $R$.
Since $\Z$ and $\Q$ are both integral domains, the units are
\[\Z[x]^{\times}=\Z^{\times}=\{\pm 1\} \text{ and } \Q[x]^{\times}=\Q^{\times}=\Q\setminus \{0\}.\] Since every ring isomorphism maps units to units, if two rings are isomorphic then the number of units must be the same.
As seen above, $\Z[x]$ contains only two units although $\Q[x]$ contains infinitely many units.
Thus, they cannot be isomorphic. |
October 25th, 2014, 09:15 AM
# 1
Finding the area inside a cardioid
I messed up towards the end but can't find my error.
$\displaystyle 2\left[\tfrac{1}{2}\int_{-\pi/2}^{\pi/2} \left(1 - 2\sin\theta + \sin^2\theta\right) d\theta\right]$
I made two integrals
$\displaystyle 2\left[\tfrac{1}{2}\int_{-\pi/2}^{\pi/2} \left(1-\sin\theta\right) d\theta\right] + 2\left[\tfrac{1}{2}\int_{-\pi/2}^{\pi/2} \sin^2\theta \, d\theta\right]$
Then I found the antiderivative:
$\displaystyle 2\left[\tfrac{1}{2}\left(\theta + 2\cos\theta + 1 - \cos(2\theta)\right)\right]_{-\pi/2}^{\pi/2}$
When evaluate each side I keep getting the wrong answer.
I keep getting 0 or π, I have been working on it for 3 days already.
October 25th, 2014, 09:45 AM
# 2
Quote: $1-\cos(2\theta)$ as an antiderivative?
For your last term in your original integrand ...
$\displaystyle \sin^2{\theta} = \frac{1-\cos(2\theta)}{2}$
antiderivative is $\displaystyle \frac{1}{2}\left(\theta - \frac{\sin(2\theta)}{2}\right)$
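Putting that correction together with the rest of the integrand, the full evaluation would go as follows (my own sketch, assuming the curve is $r = 1 - \sin\theta$, which is what the integrand $(1-\sin\theta)^2$ suggests):

$\displaystyle \int_{-\pi/2}^{\pi/2} \left(1 - 2\sin\theta + \sin^2\theta\right) d\theta = \left[\frac{3\theta}{2} + 2\cos\theta - \frac{\sin(2\theta)}{4}\right]_{-\pi/2}^{\pi/2} = \frac{3\pi}{4} - \left(-\frac{3\pi}{4}\right) = \frac{3\pi}{2},$

and the outer factor $2 \cdot \tfrac{1}{2} = 1$ leaves the stated integral equal to $\dfrac{3\pi}{2}$.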
October 25th, 2014, 10:11 AM
# 3
I can't see where the 2 in the $2\cos \theta$ term came from either. |
It is often said [e.g. Atiyah, "Bordism and Cobordism" (1961)] that the Thom spectrum $MSO(i)$ represents oriented cobordism, in the following sense: \begin{eqnarray} MSO^n(X,Y) &:=& \lim_{i \rightarrow \infty} \langle \Sigma^{i-n}(X/Y), MSO(i) \rangle\\ &=& \lim_{i \rightarrow \infty} \langle X/Y, \Omega^{i-n} MSO(i) \rangle\\ &=& \langle X/Y, \Omega^{i-n} MSO(i) \rangle, ~~\text{large}~i, \end{eqnarray} for finite CW pairs $(X,Y)$, where $\Sigma$ is the reduced suspension, $\Omega$ is the usual loop space functor, and $\langle-, -\rangle$ is the homotopy classes of pointed maps. The direct limit was taken with respect to the maps \begin{equation} \langle \Sigma^{i-n}(X/Y), MSO(i) \rangle \rightarrow \langle \Sigma^{i+1-n}(X/Y), \Sigma MSO(i) \rangle \xrightarrow{f_{i*}} \langle \Sigma^{i+1-n}(X/Y), MSO(i+1) \rangle, ~ (1) \end{equation} where $f_{i}:\Sigma MSO(i) \rightarrow MSO(i+1)$ is the natural map mentioned in Atiyah.
By the Brown representability theorem, one should be able to represent oriented cobordism in the usual sense that \begin{equation} MSO^n(X,Y) \stackrel{?}{\cong} \langle X/Y, K_n \rangle ~~~ (2) \end{equation} for some $\Omega$-spectrum $\{K_n\}$. So this is something like moving the direct limit inside $\langle -,- \rangle$.
My question is: If $K_n$ exists, then what is it? Or is it because the Brown representability theorem hypothesized a generalized cohomology theory on all CW pairs, that there isn't an $\Omega$-spectrum $\{K_n\}$ representing oriented cobordism, which is defined only for finite CW pairs?
I was able to show that (1) is actually the same, via adjunction, as the maps \begin{equation} \langle X/Y, \Omega^{i-n} MSO(i) \rangle \rightarrow \langle X/Y, \Omega^{i+1-n}\Sigma MSO(i) \rangle \rightarrow \langle X/Y, \Omega^{i+1-n} MSO(i+1) \rangle ~~~ (3) \end{equation} induced by \begin{equation} MSO(i) \xrightarrow{\eta_{MSO(i)}} \Omega \Sigma MSO(i) \xrightarrow{\Omega(f_i)} \Omega MSO(i+1), ~~~(4) \end{equation} where $\eta_Y:Y \rightarrow \Omega \Sigma Y$ is the unit of the adjunction $\Sigma \dashv \Omega$. Can we go from here to construct $K_n$ out of $MSO(i)$?
Sorry for this potentially elementary question. |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
The question I’d like to explore in this post is how Ampere’s law, the relationship between the line integral of the magnetic field to current (i.e. the enclosed current)
\begin{equation}\label{eqn:flux:20} \oint_{\partial A} d\Bx \cdot \BH = -\int_A \ncap \cdot \BJ, \end{equation} generalizes to geometric algebra, where Maxwell’s equation for a statics configuration (all time derivatives zero) is \begin{equation}\label{eqn:flux:40} \spacegrad F = J, \end{equation} where the multivector fields and currents are \begin{equation}\label{eqn:flux:60} \begin{aligned} F &= \BE + I \eta \BH \\ J &= \eta \lr{ c \rho – \BJ } + I \lr{ c \rho_\txtm – \BM }. \end{aligned} \end{equation} Here the (fictitious) magnetic charge and current densities, which can be useful in antenna theory, have been included in the multivector current for generality.
My presumption is that it should be possible to utilize the fundamental theorem of geometric calculus for expressing the integral over an oriented surface to its boundary, but applied directly to Maxwell’s equation. That integral theorem has the form
\begin{equation}\label{eqn:flux:80} \int_A d^2 \Bx \boldpartial F = \oint_{\partial A} d\Bx F, \end{equation} where \( d^2 \Bx = d\Ba \wedge d\Bb \) is a two parameter bivector valued surface, and \( \boldpartial \) is vector derivative, the projection of the gradient onto the tangent space. I won’t try to explain all of geometric calculus here, and refer the interested reader to [1], which is an excellent reference on geometric calculus and integration theory.
The gotcha is that we actually want a surface integral with \( \spacegrad F \). We can split the gradient into the vector derivative and a normal component
\begin{equation}\label{eqn:flux:160} \spacegrad = \boldpartial + \ncap (\ncap \cdot \spacegrad), \end{equation} so \begin{equation}\label{eqn:flux:100} \int_A d^2 \Bx \spacegrad F = \int_A d^2 \Bx \boldpartial F + \int_A d^2 \Bx \ncap \lr{ \ncap \cdot \spacegrad } F, \end{equation} so \begin{equation}\label{eqn:flux:120} \begin{aligned} \oint_{\partial A} d\Bx F &= \int_A d^2 \Bx \lr{ J – \ncap \lr{ \ncap \cdot \spacegrad } F } \\ &= \int_A dA \lr{ I \ncap J – \lr{ \ncap \cdot \spacegrad } I F } \end{aligned} \end{equation}
This is not nearly as nice as the magnetic flux relationship which was nicely split with the current and fields nicely separated. The \( d\Bx F \) product has all possible grades, as does the \( d^2 \Bx J \) product (in general). Observe however, that the normal term on the right has only grades 1,2, so we can split our line integral relations into pairs with and without grade 1,2 components
\begin{equation}\label{eqn:flux:140} \begin{aligned} \oint_{\partial A} \gpgrade{d\Bx F}{0,3} &= \int_A dA \gpgrade{ I \ncap J }{0,3} \\ \oint_{\partial A} \gpgrade{d\Bx F}{1,2} &= \int_A dA \lr{ \gpgrade{ I \ncap J }{1,2} – \lr{ \ncap \cdot \spacegrad } I F }. \end{aligned} \end{equation}
Let’s expand these explicitly in terms of the component fields and densities to check against the conventional relationships, and see if things look right. The line integrand expands to
\begin{equation}\label{eqn:flux:180} \begin{aligned} d\Bx F &= d\Bx \lr{ \BE + I \eta \BH } = d\Bx \cdot \BE + I \eta d\Bx \cdot \BH + d\Bx \wedge \BE + I \eta d\Bx \wedge \BH \\ &= d\Bx \cdot \BE – \eta (d\Bx \cross \BH) + I (d\Bx \cross \BE ) + I \eta (d\Bx \cdot \BH), \end{aligned} \end{equation} the current integrand expands to \begin{equation}\label{eqn:flux:200} \begin{aligned} I \ncap J &= I \ncap \lr{ \frac{\rho}{\epsilon} – \eta \BJ + I \lr{ c \rho_\txtm – \BM } } \\ &= \ncap I \frac{\rho}{\epsilon} – \eta \ncap I \BJ – \ncap c \rho_\txtm + \ncap \BM \\ &= \ncap \cdot \BM + \eta (\ncap \cross \BJ) – \ncap c \rho_\txtm + I (\ncap \cross \BM) + \ncap I \frac{\rho}{\epsilon} – \eta I (\ncap \cdot \BJ). \end{aligned} \end{equation}
We are left with
\begin{equation}\label{eqn:flux:220} \begin{aligned} \oint_{\partial A} \lr{ d\Bx \cdot \BE + I \eta (d\Bx \cdot \BH) } &= \int_A dA \lr{ \ncap \cdot \BM – \eta I (\ncap \cdot \BJ) } \\ \oint_{\partial A} \lr{ – \eta (d\Bx \cross \BH) + I (d\Bx \cross \BE ) } &= \int_A dA \lr{ \eta (\ncap \cross \BJ) – \ncap c \rho_\txtm + I (\ncap \cross \BM) + \ncap I \frac{\rho}{\epsilon} -\PD{n}{} \lr{ I \BE – \eta \BH } }. \end{aligned} \end{equation} This is a crazy mess of dots, crosses, fields and sources. We can split it into one equation for each grade, which will probably look a little more regular. That is \begin{equation}\label{eqn:flux:240} \begin{aligned} \oint_{\partial A} d\Bx \cdot \BE &= \int_A dA \ncap \cdot \BM \\ \oint_{\partial A} d\Bx \cross \BH &= \int_A dA \lr{ – \ncap \cross \BJ + \frac{ \ncap \rho_\txtm }{\mu} – \PD{n}{\BH} } \\ \oint_{\partial A} d\Bx \cross \BE &= \int_A dA \lr{ \ncap \cross \BM + \frac{\ncap \rho}{\epsilon} – \PD{n}{\BE} } \\ \oint_{\partial A} d\Bx \cdot \BH &= -\int_A dA \ncap \cdot \BJ \\ \end{aligned} \end{equation} The first and last equations could have been obtained much more easily from Maxwell’s equations in their conventional form more easily. The two cross product equations with the normal derivatives are not familiar to me, even without the fictitious magnetic sources. It is somewhat remarkable that so much can be packed into one multivector equation: \begin{equation}\label{eqn:flux:260} \oint_{\partial A} d\Bx F = I \int_A dA \lr{ \ncap J – \PD{n}{F} }. \end{equation} References
[1] A. Macdonald.
Vector and Geometric Calculus. CreateSpace Independent Publishing Platform, 2012. |
This is Calvin Lin's current status:
Gödel has proved it that there is no logical foundation of Mathematics. Discuss.
So, discuss!
Note by Mursalin Habib 5 years, 5 months ago
Let me get the ball rolling
Godel's incompleteness theorem explains that there are inherent limitations in mathematical structure. In particular, it shows that there is no proof which demonstrates that arithmetic is consistent.
The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an "effective procedure" (e.g., a computer program, but it could be any sort of algorithm) is capable of proving all truths about the relations of the natural numbers (arithmetic). For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system.
Food for thought: If we were asked to prove a given statement, but are unable to do so after an extended period of time, is it possible that the statement is unprovable?
The second incompleteness theorem, an extension of the first, shows that such a system cannot demonstrate its own consistency. This tells us that there are statements, in which we do not know if such a statement could be proved within the system. A possible candidate for such a statement, is the Riemann Hypothesis. There have been proofs of a statement X, which first consider the case where the Riemann hypothesis is true and demonstrates X, and next consider the case where the Riemann hypothesis is false and demonstrates X.
Food for thought: If we were asked to prove a given statement, but are unable to do so after an extended period of time, is it possible that we do not know if the statement is unprovable?
@Calvin Lin I have some questions as well.
(1) Does a statement have to be either true or false?
What about statements like "This statement is false"?
(2) If we prove that a statement $A$ can not be false, is it okay to say that we've proved $A$ to be true?
We do that all the time, don't we? When we prove that $\sqrt{2}$ is irrational, we show that the assumption that $\sqrt{2}$ is a rational number leads to a contradiction and therefore $\sqrt{2}$ can not be rational.
Finally the most important question:
(3) What is wrong with the following reasoning?
From what I understand about the incompleteness theorems, the first theorem is proved by constructing a 'Gödel Sentence', $G$, which goes along the way of "This statement is unprovable given this set of axioms". Now this statement can't be false because that would cause a paradox. In other words, it is impossible for $G$ to be false. Now if the answer to question (2) is yes, then doesn't it mean that we've proved $G$ to be true? But that would lead to a paradox as well! So, $G$ is neither true nor false, very much like the statement in question (1).
So is there anything wrong with this reasoning? I should add that I don't have much background on the incompleteness theorems, so I could be wrong.
I'd suppose, I'd agree with both of those questions. Either that or with the current level of our mathematics, we can't seem to prove something. Or perhaps we might require to change some definitions of things to prove them. Hmm...
Oh, this has just been reshared. Well, let me jump right in. Godel's Incompleteness Theorems are a popular subject of discussion, but what's not mentioned often enough is the connection with Turing's Halting Problem. For people familiar with computer programming, it might be easier to understand the former in terms of the latter. Anyone who has done programming knows how easily it happens for a program that hasn't been debugged to "get stuck in some kind of an endless loop", forcing a reboot. Not every program that crashes does so because it's in an endless loop, but what can happen is that the program never halts, and we have no idea if it ever will. Turing was the first to prove that there exists no algorithm or systematic method of determining whether or not any given code would suffer from this, thus condemning generations of programmers to the everlasting worry that their code may fail. Numerous programming techniques and restrictions on how code may be written and compiled have been advanced to "stay within the box of coding that is known not to suffer from the halting problem". The analogy here in mathematics is that mathematicians work to identify theorems that are proven, or problems that are known to have solutions (but not yet found). But unfortunately, just as programmers cannot have confidence that arbitrarily written code won't fail, mathematicians have to live with the reality that there are many conjectures, the proofs of which may "never halt", i.e., not be determinable in a finite time.
However, to make the claim "there is no logical foundation of Mathematics" is throwing the baby out with the bathwater. Just as modern programmers have to work with coding principles and restrictions to avoid the halting problem, mathematicians simply have to accept that there is a smaller subset of "all possible conjectures", those that either have proofs or at least have not been shown to be undecidable, that makes for the mathematics we have and can use, and relegate the rest to the Oort Cloud of undecidables. There's still a practical, useful mathematics, and there's a foundation for it, within that limited sphere. It had been an illusion to begin with that any utterable conjecture necessarily is either true or false (the Law of Excluded Middle, which had to be made an AXIOM), so now, as with the discovery of Non-Euclidean Geometry and its utility, we have a wider, and perhaps richer understanding of mathematics, one that affords more possibilities, not less.
This makes me think of Euclid's Elements, and his 5 axioms.
Heheh.
Whaaaa?
Now @Calvin Lin's current status:
I don't even want to think about that. $1+1=2$. Bam. Done.
Try changing bases. :-0 It won't stay consistent, I think that's kinda what it says. Perhaps...
I've had this on a math test.
"What is 1+1?1+1?1+1? Base 10,10,10, no tricks!"
The answer is $10$ because the entire question was written in base $2$, including the part that told you what the base was. I was really happy when I got that. (It was extra credit)
@Trevor B. – That's awesome. :D
Not really, what it refers to is the fact that you must take certain things on faith, you can't prove it i.e. axioms. When changing bases arithmetic is still consistent, you can do any calculation in any base and still get the same answer, just in a different base. 1+1=10 in base 2, however 10 in base 2 equals 2 in base 10, you still get the same answer, but it just "looks" different because they are in different bases.
@Nikhil Pandya – That's... certainly correct.
I agree
Ha, ha funny.
Although, you could "prove" that $1+1=2$... Hmm...
Principia Mathematica says "The above proposition is occasionally useful." when referring to that equation.
Yeah...... I saw a mind blowing proof somewhere on this site!
There is a proof that 1=2, but it's rather faulty.
Great status, nice... |
I would like to create a math formula as follows. How do I write it in LaTeX?
I recommend reading one of the guides listed here: What are good learning resources for a LaTeX beginner?.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[ f \sim \mathcal{GP}( \mu(x), K(\mathbf{x},\mathbf{x}';\theta) ) \]
\end{document}
Next time, please add a minimal working example (MWE) of what you tried. |
May 9th, 2017, 09:41 AM
# 1
Green's Theorem and Line Integrals
Hey Everybody,
So I'm doing some line integrals and checking my work with Green's Theorem, but I'm not getting the same answer. Here's the problem:
Evaluate the line integral F · dr over C, where C is the counterclockwise oriented triangle with vertices (0, 0),(2, 0), and (2, 2) and
$\displaystyle F(x, y) = \langle 5 - 2xy - y^2, \; 2xy - x^2 \rangle$
The answer is 16/3, which I get when I use Green's Theorem, but when I try to directly calculate the line integral I get 14/3. Here is my process:
First I parametrize the region C.
$x = x$
$y = x$
where $0 \le x \le 2$
Then I solve:
r(x) = <x,x>
r'(x) = <1, 1>
F(r(x)) = <5 - 2x^2 - x^2, 2x^2 - x^2>
F(r(x)) = <5 - 3x^2, x^2>
F(r(x)) · r'(x) = 5 - x^2
The integral of that with respect to x is 14/3
Please let me know what I am doing wrong. Thanks for your help!
May 9th, 2017, 11:33 AM
# 2
I'm not really sure what you're doing there.
There are three pieces to the curve being integrated over.
$r1=(2t,0),~t \in [0,1]$
$r2=(2,0)+(0,2t),~t \in [0,1]$
$r3=(2,2)+(-2t,-2t),~t \in [0,1]$
$\dot{r1} = (2,0)$
$\dot{r2}=(0,2)$
$\dot{r3}=(-2,-2)$
$f=(5-2xy-y^2,~2xy - x^2)$
on $r1,~f = (5, -4t^2)$
$\dot{r1}\cdot f = 10$
$\ell_1 = \displaystyle \int_0^1~10~dt = 10$
I leave it to you to finish the other two pieces.
$\dfrac {16}{3}$ is the correct answer.
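If you want to check all three pieces by machine, here is a small sympy sketch (my own verification, not part of the original post):

import sympy as sp

t = sp.symbols('t')

def F(x, y):
    # F(x, y) = <5 - 2xy - y^2, 2xy - x^2>
    return sp.Matrix([5 - 2*x*y - y**2, 2*x*y - x**2])

# The three sides of the triangle, each over t in [0, 1], traversed
# counterclockwise: (0,0)->(2,0), then (2,0)->(2,2), then (2,2)->(0,0).
segments = [
    (sp.Matrix([2*t, 0]), sp.Matrix([2, 0])),
    (sp.Matrix([2, 2*t]), sp.Matrix([0, 2])),
    (sp.Matrix([2 - 2*t, 2 - 2*t]), sp.Matrix([-2, -2])),
]

total = sum(sp.integrate(F(r[0], r[1]).dot(dr), (t, 0, 1))
            for r, dr in segments)
print(total)  # 16/3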
May 9th, 2017, 01:34 PM
# 3
Just an FYI, that dot above r1, r2, and r3 means the first derivative. I didn't know that for the longest time, until I started a physics class.
I'd also recommend that in the future you don't parameterize with a variable that already exists in the original function; it'll only lead to tears. It seems to be common convention when doing line integrals of one parameter to use the variable t.
Romsek has it dead on with his solution. Remember that when doing line integrals you have to follow the curve itself, and in this example it tells you to follow it in a counter-clockwise direction. When doing line integrals of a force field, orientation is very, very important (it is the direction you are traveling with or against those forces).
Also, remember to check for conservative fields: a closed-loop line integral in a conservative field is always 0.
if $\displaystyle \frac{\partial f}{\partial y}=\frac{\partial g}{\partial x}$ when $\displaystyle F =(f(x,y),g(x,y))$, then the field is conservative.
$\displaystyle \oint_{C} F\cdot dr=0$, if F is conservative.
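As a quick machine check of that criterion on this thread's field (my own sketch, not part of the original post):

import sympy as sp

x, y = sp.symbols('x y')
f = 5 - 2*x*y - y**2  # first component of F
g = 2*x*y - x**2      # second component of F

print(sp.diff(f, y))  # -2*x - 2*y
print(sp.diff(g, x))  # -2*x + 2*y

The two partials differ, so this field is not conservative, which is consistent with the closed-loop integral coming out nonzero.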
Hope this helps you. |
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, which gives $v(t) = 3t^2-12t+9$. But how do I find the intervals from there?
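One standard way to finish: factor the velocity and read off its sign, $v(t) = 3t^2-12t+9 = 3(t-1)(t-3)$, which is negative exactly when $1 < t < 3$, so the particle is moving to the left on the interval $(1, 3)$.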
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1)
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism |
Regularity in Campanato spaces for solutions of fully nonlinear elliptic systems
1. Dipartimento di Informatica, Matematica, Elettronica e Trasporti, Università degli Studi Mediterranea di Reggio Calabria, Loc. Feo di Vito, I-89060 Reggio Calabria, Italy
2. Dipartimento di Matematica “L. Tonelli”, Università di Pisa, Largo B. Pontecorvo, 5, I-56127 Pisa, Italy
We show that there exist $\varepsilon, \overline{\varepsilon}\in (0,1),$ ($\varepsilon,\overline{\varepsilon}$ depend on $\gamma$ and $\delta$), such that for any $\zeta \in (0,\overline{\varepsilon}\, n) ,$ and $ \mu \in( 0,\lambda],$ with $ \mu< (2b+\zeta)\wedge [\epsilon\,(n+2)],$ we have $D^2 u \in {\mathcal L}^{2,\mu}(\Omega,\mathbb{R}^{n^2N}),$ where $\varepsilon$ and $\overline{\varepsilon}$ depend on the constants appearing in Condition $A_x.$
Mathematics Subject Classification: Primary: 35J55, 35B65; Secondary: 35J5. Citation: Luisa Fattorusso, Antonio Tarsia. Regularity in Campanato spaces for solutions of fully nonlinear elliptic systems. Discrete & Continuous Dynamical Systems - A, 2011, 31 (4) : 1307-1323. doi: 10.3934/dcds.2011.31.1307 |
Assume we are given smooth functions $f, g: U \to \mathbb{C}$, where $0 \in U \subset \mathbb{R}^n$ is open and $0 \in g^{-1}(0) \subset \{x_n = 0\}$. Furthermore, suppose that $\nabla g \neq 0$ on the set $g^{-1}(0)$ and let $h = \frac{f}{g}$ defined on $U \setminus g^{-1}(0)$. Assume that all the partial derivatives $D^\alpha h$ for $k \geq 0$ are bounded in $\{x_n > 0\}$. Does there exist a smooth extension $\tilde{h}$ of $h$ to the whole of $U$?
Here's something that I've tried (my intuition is that such an extension exists):
Because all the derivatives are bounded in $x_n > 0$ we easily conclude that $h$ is uniformly continuous and hence there exists a unique smooth continuation to $\{x_n \geq 0\}$. However, I wasn't able to relate this to the $\{x_n < 0\}$ region.
For n = 1, this is basically l'Hospital's rule and such an extension clearly exists. We can use this on the set of lines given by varying $x_n$ and keeping $(x_1, \dotso, x_{n - 1})$ constant (and assuming $\frac{\partial g}{\partial x_n} \neq 0$).
If there's an open set $V \subset g^{-1}(0)$, then we may use Taylor's theorem to conclude that $g = x_n g_1$ with $g_1 \neq 0$ near $V$. Since $h$ is bounded, we must have $f = x_n f_1$ near $V$ and so $h$ extends. How do we then extend over the remaining, nowhere dense set? |
We met some common types of inductive argument back in Chapter 2. Now that we know how to work with probability, let’s use what we’ve learned to sharpen our understanding of how those arguments work.
Generalizing from observed instances was the first major form of inductive argument we encountered. Suppose you want to know what colour a particular species of bird tends to be. Then you might go out and look at a bunch of examples:
I’ve seen \(10\) ravens and they’ve all been black.
Therefore, all ravens are black.
How strong is this argument?
Observing ravens is a lot like sampling from an urn. Each raven is a marble, and the population of all ravens is the urn. We don’t know what nature’s urn contains at first: it might contain only black ravens, or it might contain ravens of other colours too. To assess the argument’s strength, we have to calculate \(\p(A \given B_1 \wedge B_2 \wedge \ldots \wedge B_{10})\): the probability that all ravens in nature’s urn are black, given that the first raven we observed was black, and the second, and so on, up to the tenth raven.
We learned how to solve simple problems of this form in the previous chapter. For example, imagine you face another of our mystery urns, and this time there are two equally likely possibilities: \[ \begin{aligned} A &: \mbox{The urn contains only black marbles.} \\ \neg A &: \mbox{The urn contains an equal mix of black and white marbles.} \\ \end{aligned} \] If we do two random draws with replacement, and both are black, we calculate \(\p(A \given B_1 \wedge B_2)\) using Bayes’ theorem: \[ \begin{aligned} \p(A \given B_1 \wedge B_2) &= \frac{\p(B_1 \wedge B_2 \given A)\p(A)}{\p(B_1 \wedge B_2 \given A) \p(A) + \p(B_1 \wedge B_2 \given \neg A) \p(\neg A)} \\ &= \frac{(1)^2(1/2)}{(1)^2(1/2) + (1/2)^2(1/2)}\\ &= 4/5. \end{aligned} \] If we do a third draw with replacement, and it too comes up black, we replace the squares with cubes. On the fourth draw we’d raise to the fourth power. And so on. When we get to the tenth black draw, the calculation becomes: \[ \begin{aligned} \p(A \given B_1 \wedge \cdots \wedge B_{10}) &= \frac{(1)^{10}(1/2)}{(1)^{10}(1/2) + (1/2)^{10}(1/2)}\\ &= 1,024/1,025\\ &\approx .999. \end{aligned} \] So after ten black draws, we can be about \(99.9\%\) certain the urn contains only black marbles.
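As a quick sanity check of that arithmetic, here is a short Python sketch (my own illustration; the helper name is made up):

from fractions import Fraction

def posterior_all_black(n, prior=Fraction(1, 2)):
    # Posterior that the urn is all black after n consecutive black draws
    # (with replacement), for the two equally likely hypotheses above.
    like_A = Fraction(1)**n        # P(n black draws | all black) = 1
    like_notA = Fraction(1, 2)**n  # P(n black draws | half black)
    return like_A*prior / (like_A*prior + like_notA*(1 - prior))

print(posterior_all_black(2))   # 4/5
print(posterior_all_black(10))  # 1024/1025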
But that doesn’t mean our argument that all ravens are black is \(99.9\%\) strong!
There are two major limitations to our urn analogy.
The first limitation is that the ravens we observe in real life aren’t randomly sampled from nature’s “urn”. We only observe ravens in certain locations, for example. But our solution to the urn problem relied on random sampling. For example, we assumed \(\p(B_1 \given \neg A) = 1/2\) because the black marbles are just as likely to be drawn as the white ones, if there are any white ones.
If there are white ravens in the world though, they might be limited to certain locales. In fact there are white ravens, especially in one area of Vancouver Island. So the fact we’re only observing ravens in our part of the world could make a big difference to what we find. It matters whether your sample really is random.
The second limitation is that we pretended there were only two possibilities: either all the marbles in the urn are black, or half of them are. And, accordingly, we assumed there was already a 1/2 chance all the marbles are black, before we even looked at any of them.
In real life though, when we encounter a new species, it could be that \(90\%\) of them are black, or \(31\%\), or \(42.718\%\), or any portion from \(0\%\) to \(100\%\). So there are many, many more possibilities. The possibility that all members of the new species (\(100\%\)) are black is just one of these many possibilities. So it would start with a much lower probability than \(1/2\).
In a famous calculation, Laplace showed how to solve the second problem. The result was his famous rule of succession: if you do \(n\) random draws and they’re all black, the probability the next draw will be black is \((n+1)/(n+2)\). Laplace’s calculation is too advanced for this book. But statistician Joe Blitzstein gives a nice explanation in this video, for students who have more background in probability (specifically random variables and probability density functions).
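To get a feel for the difference this makes, applying Laplace's rule to our ten black ravens gives \[ \frac{n+1}{n+2} = \frac{10+1}{10+2} = \frac{11}{12} \approx 0.92 \] as the probability that the next raven observed is black, a much more modest figure than the \(99.9\%\) we got from the two-possibility urn.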
We could make our analysis more realistic by taking these complications into account. But the calculations would get ugly, and we’d have to use calculus to solve the problem. This is the kind of technical topic you’d cover in a math or statistics class on probability. But it’s not the kind of thing this book is about.
For our purposes, the key ideas that matter are as follows. First, the strength of an inductive argument is a question of conditional probability: how probable is the argument’s conclusion given its premises? Second, an argument that generalizes from observed instances is similar to an urn problem, where we guess the contents of the urn by repeated sampling. And third, we know how to solve simple versions of these urn problems using Bayes’ theorem. In principle, we could also solve more complicated and realistic versions using the same fundamental ideas, the math would just be harder.
Another common form of inductive argument we met in Chapter 2 was Inference to the Best Explanation. An example:
My car won’t start and the gas gauge reads ‘empty’.
Therefore, my car is out of gas.
My car being out of gas is a very good explanation of the facts that it won’t start and the gauge reads ‘empty’. So this seems like a pretty strong argument.
How do we understand its strength using probability? This is actually a controversial topic, currently being studied by researchers. There are different, competing theories about how Inference to the Best Explanation fits into probability theory. So we’ll just look at one, popular way of understanding things.
Let’s start by thinking about what makes an explanation a good one.
A good explanation should account for all the things we’re trying to explain. For example, if we’re trying to explain why my car won’t start and the gauge reads ‘empty’, I’d be skeptical if my mechanic said it’s because the brakes are broken. That doesn’t account for any of the symptoms! I’d also be skeptical if they said the gas gauge was broken. That might fit okay with one of the symptoms (the gauge reads ‘empty’), but it doesn’t account for the fact the car won’t start.
The explanation that my car is out of gas, however, fits both symptoms. It would account for both the ‘empty’ reading on the gauge and the car’s refusal to start.
A good explanation should also fit with other things I know. For example, suppose my mechanic tries to explain my car troubles by saying that both the gauge and the ignition broke down at the same time. But I know my car is new, it’s a highly reliable model, and it was recently serviced. So my mechanic’s explanation doesn’t fit well with the other things I know. It’s not a very good explanation.
We have two criteria now for a good explanation:
1. It should account for all the things we’re trying to explain.
2. It should fit with the other things we know.
These criteria match up with terms in Bayes’ theorem. Imagine we have some evidence \(E\) we’re trying to explain, and some hypothesis \(H\) that’s meant to explain it. Bayes’ theorem says: \[ \p(H \given E) = \frac{\p(H)\p(E \given H)}{\p(E)}. \] How probable is our explanation \(H\) given our evidence \(E\)? Well, the larger the terms in the numerator are, the higher that probability is. And the terms in the numerator correspond to our two criteria for a good explanation.
\(\p(E \given H)\) corresponds to how well our hypothesis \(H\) accounts for our evidence \(E\). If \(H\) is the hypothesis that the car is out of gas, then \(\p(E \given H) \approx 1\). After all, if there’s no gas in the car, it’s virtually guaranteed that it won’t start and the gauge will read ‘empty’. (It’s not perfectly guaranteed because the gauge could be broken after all, though that’s not very likely.)
\(\p(H)\) corresponds to how well our hypothesis fits with other things we know. For example, suppose I know it’s been a while since I put gas in the car. If \(H\) is the hypothesis that the car is out of gas, this fits well with what I already know, so \(\p(H)\) will be pretty high.
Whereas if \(H\) is the hypothesis that the gauge and the ignition both broke down at the same time, this hypothesis starts out pretty improbable given what else I know (it’s a new car, a reliable model, and recently serviced). So in that case, \(\p(H)\) would be low.
So the better \(H\) accounts for the evidence, the larger \(\p(E \given H)\) will be. And the better \(H\) fits with my background information, the larger \(\p(H)\) will be. Thus, the better \(H\) is as an explanation, the larger \(\p(H \given E)\) will be. And thus the stronger \(E\) will be as an argument for \(H\).
What about the last term in Bayes’ theorem though, the denominator \(\p(E)\)? It corresponds to a virtue of good explanations too!
Figure 10.1: The hammer/feather experiment was performed on the moon in 1971 (see the full video here). It’s also been performed in vacuum chambers here on earth; a beautifully filmed example is available on YouTube, courtesy of the BBC.
Scientists love theories that explain the unexplained. For example, Newton’s theory of physics is able to explain why a heavy object and a light object, like a hammer and feather, fall to the ground at the same speed as long as there’s no air resistance. If you’d never performed this experiment before, you’d probably expect the hammer to fall faster. You’d be surprised to find that the hammer and feather actually hit the ground at the same time. That Newton’s theory explains this surprising fact strongly supports his theory.
So the ability to explain surprising facts is a third virtue of a good explanation. And this virtue corresponds to our third term in Bayes’ theorem: \(\p(E)\), the prior probability of the evidence. The more surprising the evidence \(E\) is, the smaller \(\p(E)\) is. And since \(\p(E)\) is in the denominator of Bayes’ theorem, a smaller number there means a bigger value for \(\p(H \given E)\). So the more surprising the finding \(E\) is, the more it supports a hypothesis \(H\) that explains it.
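To see how the three terms interact, here is a toy calculation with made-up numbers (not from the text): suppose \(\p(H) = 0.3\), \(\p(E \given H) = 0.95\), and \(\p(E) = 0.4\). Then \[ \p(H \given E) = \frac{(0.3)(0.95)}{0.4} \approx 0.71, \] so a moderately plausible hypothesis that accounts well for fairly surprising evidence ends up more than twice as probable as it started.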
According to this analysis then, each term in Bayes’ theorem corresponds to a virtue of a good explanation. And that’s why Inference to the Best Explanation works as a form of inductive inference. |
First Order Transient and Steady State Analysis
concept
When analysing a control system we want to know how it responds to the inputs we're likely to give it when it's running. Unfortunately we often don't know ahead of time exactly what the inputs are going to be. For instance, a thermostat system that uses a thermometer and a set ideal temperature to control an air conditioner is going to have all kinds of combinations of ambient temperature and ideal temperature to deal with. We can't possibly model every scenario ahead of time. So what do we do? When designing and analysing a control system we use "test signals" which represent the kinds of signals we're likely to encounter. The typical test signals are an impulse (like a delta function), a step function, a ramp and maybe a parabola. By observing how your system responds to these test inputs we can get a pretty good idea of how it'll work when given a complex signal out in the real world.
To analyse how your system responds to these test signals we look at two parts of its response, the transient response and the steady state response. The steady state response is what value your system eventually settles at, its final value. The transient response is everything your system does until it settles down.
fact
If a system has transient response \(c_{tr}(t)\) and steady state response \(c_{ss}(t)\) then its total response is given by: $$ c(t) = c_{tr}(t) + c_{ss}(t) $$
The most important property of a control system is whether or not it is stable. Stability means that your system, when not given any changing input, will either settle on a value or stay between two bounds.
fact
A system can be unstable, stable or critically stable.
Unstable: the system's output goes to \(\infty\) or \(-\infty\)
Stable: the system's output settles on a single value when not given a changing input
Critically stable: the system's output oscillates between two finite bounds forever
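For the linear time-invariant systems treated in the rest of this section, these three cases can be read off from the poles of the transfer function (the roots of its denominator). A minimal Python sketch of that check, assuming an LTI system and ignoring repeated poles on the imaginary axis; the example denominators are made up:

import numpy as np

def classify(den, tol=1e-9):
    # Classify stability from the real parts of the poles (roots of the denominator).
    re = np.roots(den).real
    if np.all(re < -tol):
        return "stable"
    if np.any(re > tol):
        return "unstable"
    return "critically stable"  # poles on the imaginary axis: sustained oscillation

print(classify([1, 3, 2]))   # s^2 + 3s + 2: poles -1, -2 -> stable
print(classify([1, 0, 4]))   # s^2 + 4: poles +/-2j -> critically stable
print(classify([1, -1, 2]))  # s^2 - s + 2: right-half-plane poles -> unstable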
If a system is shown to be stable (we'll get to how in a little bit), the next thing we care about is the relative stability and the steady-state error of the system.
fact
Relative stability is a measure of how "close" the system is to being unstable. Since the physical components used to build a system have tolerances, we want a large relative stability so we know that even if a resistor is off by 10% our system will still be stable.
fact
The steady-state error of a system is the difference between the input and the output as time goes to infinity.
example
Find the steady state error of the system shown below: The red line is the input and the blue line is the system output
We can see that the output settles at 0.8 while the input holds steady at 1 so our steady state error is: \(e_{ss} = 1 - 0.8 = 0.2\)
It's possible to have a finite steady-state error even when neither the input nor the output settles to any value, so long as their difference converges to some number as time goes to infinity.
example
Find the steady state error of the system shown below: The purple line is the input and the yellow line is the system output
We can see that the input and output are parallel lines so their difference is going to remain the same out to infinity. \(e_{ss} = \text{input} - \text{output} = 0.1\)
Now we're going to see how to actually analyse first order systems by finding their outputs when given some of these typical test signals. In all the discussion that follows, remember that the output of a system is given by \(C(s)\) and the input by \(R(s)\), making the transfer function \(\frac{C(s)}{R(s)}\).
fact
A first order system is one whose input-output equation is a first order differential equation
First order systems are the simplest of the systems we'll be looking at and serve as a great introduction to some computational techniques we use on more complex systems.
fact
Given a first order system with transfer function \(H(s) = \frac{C(s)}{R(s)}\) its output when given input \(R(s)\) must be: $$C(s) = H(s)R(s)$$
This is just a simple bit of algebra but since systems are usually given by their transfer function it pays to keep it in mind.
example
Find the step response of the first order system with transfer function \(H(s) = \frac{1}{Ts + 1}\)
The Laplace transform of the step function is \(\frac{1}{s}\) so our output is given by: \(C(s) = H(s)\cdot \frac{1}{s} = \frac{1}{Ts + 1}\cdot \frac{1}{s}\) Expand by partial fractions and simplify to get: \(C(s) = \frac{1}{s} - \frac{1}{s + (1/T)}\) Inverse Laplace to find: \(c(t) = 1 - e^{-\frac{t}{T}}\) You might recognise this form from capacitors and inductors charging and discharging. In fact the same rules apply, with \(T\) as the time constant (63.2% of the final value after one time constant, 99.3% of the final value after 5 time constants, etc.).
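A quick numerical check of this result with scipy (the time constant T = 2.0 is an arbitrary choice for the sketch):

import numpy as np
from scipy import signal

T = 2.0
system = signal.TransferFunction([1], [T, 1])    # H(s) = 1/(Ts + 1)
t, y = signal.step(system, N=500)                # unit step response
print(np.max(np.abs(y - (1 - np.exp(-t / T)))))  # ~0: matches c(t) = 1 - e^(-t/T)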
The process of getting a system's output given its transfer function and input can seem a little daunting at first, but it's nothing you haven't done before. If you're having trouble, go back and study the Laplace transform and partial fractions, then you'll have no trouble at all.
example
Find the unit ramp response of the first order system with transfer function \(\frac{C(s)}{R(s)} = \frac{1}{Ts + 1}\)
The unit ramp is a \(45^\circ\) straight line, its Laplace transform is \(\frac{1}{s^2}\) so: $$ C(s) = H(s)\cdot \frac{1}{s^2} $$ $$ C(s) = \frac{1}{Ts + 1}\cdot \frac{1}{s^2} $$ Partial fractions gives us: $$ C(s) = \frac{1}{s^2} - \frac{T}{s} + \frac{T^2}{Ts + 1} $$ And an inverse Laplace transform gives us: $$ c(t) = t - T + Te^{-t/T} $$ Now the error at any point in time is: $$ e(t) = r(t) - c(t) $$ $$ e(t) = T(1 - e^{-t/T}) $$ As \(t \to \infty, \quad e(t) = T\) So our steady state error is \(T\).
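The ramp response can be checked numerically too, by simulating the system with scipy's lsim (again with an assumed T = 2.0); the tracking error settles at T, as derived above:

import numpy as np
from scipy import signal

T = 2.0
system = signal.TransferFunction([1], [T, 1])  # H(s) = 1/(Ts + 1)
t = np.linspace(0, 20 * T, 2000)
tout, y, _ = signal.lsim(system, U=t, T=t)     # unit ramp input r(t) = t
print((t - y)[-1])                             # ~2.0: steady state error e_ss = T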
If you were paying very careful attention you may have noticed something interesting about the above results. The derivative of a ramp is a step, and the response to the step was the derivative of the response to the ramp. Likewise, an impulse is the derivative of a step, and the response to the impulse is the derivative of the response to the step.
fact
The response to the derivative of an input signal is the derivative of the system's response to the original signal.
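A numerical sanity check of this fact, reusing the first order system from the examples above (T = 2.0 is again an assumed value): differentiating the ramp response should reproduce the step response.

import numpy as np
from scipy import signal

T = 2.0
system = signal.TransferFunction([1], [T, 1])
t = np.linspace(0, 10 * T, 4000)
_, ramp_out, _ = signal.lsim(system, U=t, T=t)                # response to a ramp
_, step_out, _ = signal.lsim(system, U=np.ones_like(t), T=t)  # response to a step
d_ramp = np.gradient(ramp_out, t)                             # d/dt of the ramp response
print(np.max(np.abs(d_ramp - step_out)[1:-1]))                # small, up to numerical error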
example
The output of a system to a unit step input is \(c(t) = \ln(t)\). Find the output of the system to an impulse input.
We don't have the transfer function so we can't do this like the above examples. But using our last fact we know that the output to the impulse function will be the derivative of the output when given the step function. So \(c(t) = \frac{d}{dt}\ln(t)\) \(c(t) = \frac{1}{t}\) Look how easy that was. |
(a) Prove that $S=\{x^2\mid x\in G\}$ is a subgroup of $G$.
Consider the map $\phi:G \to G$ defined by $\phi(x)=x^2$ for $x\in G$. Then $\phi$ is a group homomorphism. In fact, for any $x, y \in G$, we have\begin{align*}\phi(xy)=(xy)^2=x^2y^2=\phi(x)\phi(y)\end{align*}as $G$ is an abelian group.
By definition of $\phi$, the image is $\im(\phi)=S$. Since the image of a group homomorphism is a subgroup, we conclude that $S$ is a subgroup of $G$.
(b) Determine the index $[G : S]$.
By the first isomorphism theorem, we have\[G/\ker(\phi)\cong S.\]
If $x\in \ker(\phi)$, then $x^2=1$. It follows that $(x-1)(x+1)=0$ in $\Zmod{p}$. Since $\Zmod{p}$ is an integral domain, it follows that $x=\pm 1$ and $\ker(\phi)=\{\pm 1\}$.
Thus, $|S|=|G/\ker(\phi)|=(p-1)/2$ and hence the index is\[[G:S]=|G|/|S|=2.\]
(c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$.
Since $-1\notin S$ and $[G:S]=2$, we have the decomposition\[G=S\sqcup (-S).\]Suppose that an element $a$ in $G$ is not in $S$.
Then we have $a\in -S$. Thus, there exists $b\in S$ such that $a=-b$. It follows that $-a=b\in S$. Therefore, we have either $a\in S$ or $-a\in S$.
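A brute-force check of (a)-(c) in Python for a small case. The prime p = 11 is an arbitrary choice; since 11 ≡ 3 (mod 4), −1 is not a square, so the hypothesis of (c) holds:

p = 11
G = set(range(1, p))                              # the multiplicative group of Z/pZ
S = {x * x % p for x in G}                        # the set of squares
assert all(a * b % p in S for a in S for b in S)  # S is closed, hence a subgroup
print(len(G) // len(S))                           # the index [G:S] = 2
assert (p - 1) not in S                           # -1 (= p-1 mod p) is not a square
assert all(a in S or (p - a) in S for a in G)     # part (c): a in S or -a in S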
|
Continuity of cost functional and optimal feedback controls for the stochastic Navier Stokes equation in 2D
Department of Mathematics, University of Southern California, Los Angeles, CA 90089, USA
We show the continuity of a specific cost functional $J(\phi) =\mathbb{E} \sup_{ t \in [0, T]}(\varphi(\mathcal{L}[t, u_\phi(t), \phi(t)]))$ of the SNSE in 2D on an open bounded nonperiodic domain $\mathcal{O}$ with respect to a special set of feedback controls $\{\phi_n\}_{n \geq 0}$, where $\varphi(x) =\log(1 + x)^{1-\epsilon}$ with $0 < \epsilon < 1$.
Mathematics Subject Classification: 60H15, 60H30. Citation: Kerem Uǧurlu. Continuity of cost functional and optimal feedback controls for the stochastic Navier Stokes equation in 2D. Communications on Pure & Applied Analysis, 2017, 16(1): 189-208. doi: 10.3934/cpaa.2017009
|
Authors: B ANANTHANARAYAN, IRINEL CAPRINI, I SENTITEMSU IMSONG. Source: [J] Pramana (IF 0.562), 2012, Vol. 79(5), pp. 1317-1320, Springer. Abstract: Analyticity and unitarity techniques are employed to estimate Taylor coefficients of the pion electromagnetic form factor at t = 0 by exploiting the recently evaluated two-pion contribution to the muon (g − 2) and the phase of the pion electromagnetic form fac...
Authors: Caidi Zhao, Bei Li. Source: [J] Electronic Journal of Differential Equations (IF 0.426), 2016, Vol. 2016(179), pp. 1-20, DOAJ. Abstract: We study the three-dimensional (3D) regularized magnetohydrodynamics (MHD) equations. Using the method of splitting of the asymptotic approximate solutions into higher and lower Fourier components, we prove that the global attractor of the 3D regularized MHD equations consists of rea...
Authors: GAUHAR ABBAS, B ANANTHANARAYAN, Irinel CAPRINI ... Source: [J] Pramana (IF 0.562), 2012, Vol. 79(4), pp. 891-894, Springer. Abstract: The Kπ form factors are investigated at low energies by the method of unitarity bounds adapted so as to include information on the phase and modulus along the elastic region of the unitarity cut. Using as input the values of the form factors at t = 0, and at the...
Authors: Jinyi Sun, Minghua Yang, Shangbin Cui. Source: [J] Annali di Matematica Pura ed Applicata (1923 -) (IF 0.68), 2017, Vol. 196(4), pp. 1203-1229, Springer. Abstract: We study the three-dimensional Navier–Stokes equations in the rotational framework. By using the Littlewood–Paley analysis technique and the dispersive estimates for the Coriolis linear group \(\{e^{\pm i\Omega t\frac{D_3}{|D|}}\}_{t\in \mathbb{R}}\), we prove unique exist...
Authors: A. Alexandrou Himonas, Henrik Kalisch, Sigmund Selberg. Source: [J] Nonlinear Analysis: Real World Applications (IF 2.201), 2017, Vol. 38, pp. 35-48, Elsevier. Abstract: Persistence of spatial analyticity is studied for periodic solutions of the dispersion-generalized KdV equation $u_t - |D_x|^{\alpha} u_x + u u_x = 0$ for $\alpha \geq 2$. For a class of analytic initial data with a uniform radius of analyticity $\sigma_0 > 0$, we obtain an asympt...
Authors: Huijun He, Zhaoyang Yin. Source: [J] Journal of Differential Equations (IF 1.48), 2019, Vol. 267(4), pp. 2531-2559, Elsevier. Abstract: In this paper, we mainly consider the Gevrey regularity and analyticity of the solution to a generalized two-component shallow water wave system with higher-order inertia operators, namely, $m = (1 - \partial_x^2)^s u$ with $s > 1$. Firstly, we obtain the Gevrey regula...
Authors: Rafael F. Barostichi, A. Alexandrou Himonas, Gerson Petronilho. Source: [J] Journal of Differential Equations (IF 1.48), 2017, Elsevier. Abstract: ... The main assumptions are that the initial datum $u_0(x)$ is analytic on the line, it has uniform radius of analyticity $r(0) > 0$, and is such that the McKean quantity $m_0(x) \doteq (1-\partial_x^2)u_0(x)$ does not change sign. Furthermore, an explicit ...
|
Volume 55, № 9, 2003
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1155-1166
We prove the holomorphy of a function that, at every point, preserves either angles or dilations with respect to a certain set.
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1167-1177
In the case of approximation of periodic functions in the space $S^p$, we determine the exact constants in Jackson-type inequalities for the Zygmund, Rogosinski, and de la Vallée Poussin linear summation methods.
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1178-1195
We investigate the Gauss variational problem over fairly general classes of Radon measures in a locally compact space X. We describe potentials of minimizing measures, establish their characteristic properties, and prove the continuity of extremals. Extremal problems dual to the original one are formulated and solved. The results obtained are new even in the case of classical kernels and the Euclidean space \(\mathbb{R}^n\).
Almost-Everywhere Convergence and (o)-Convergence in Rings of Measurable Operators Associated with a Finite von Neumann Algebra
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1196-1205
We study the relationship between (o)-convergence and almost-everywhere convergence in the Hermitian part of the ring of unbounded measurable operators associated with a finite von Neumann algebra. In particular, we prove a theorem according to which (o)-convergence and almost-everywhere convergence are equivalent if and only if the von Neumann algebra is of type I.
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1206-1217
We prove generalized Noether theorems for a singular integral equation with Cauchy kernel on a closed rectifiable Jordan curve in classes of piecewise-continuous functions with oscillation-type discontinuities. We obtain results concerning the normal solvability of operators associated with the equation and acting into a Banach space and incomplete normed spaces of piecewise-continuous oscillating functions.
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1218-1223
We propose a method for the factorization of algebraic polynomials with real or complex coefficients and construct a numerical algorithm, which, along with the factorization of a polynomial with multiple roots, solves the problem of the determination of multiplicities and the number of multiple roots of the polynomial.
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1224-1237
We consider algebras generated by idempotents in Banach spaces and orthoprojectors in Hilbert spaces whose sum is a multiple of the identity. We construct several functors generated by homomorphisms of the algebras considered between categories of representations. We investigate properties of these functors and present their applications.
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1238-1248
We show that the geometric structure of an arbitrarily curved Riemannian space is locally determined by a deformed group of its diffeomorphisms.
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1249-1253
We prove that any irreducible faithful representation of an almost torsion-free Abelian group G of finite rank over a finitely generated field of characteristic zero is induced from an irreducible representation of a finitely generated subgroup of the group G.
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1254-1259
We describe sequences of zeros of functions f ≢ 0 analytic in the half-plane \({\mathbb{C}}_+ = \{z : \operatorname{Re} z > 0\}\) and satisfying the condition \((\exists \tau_1 \in (0;1))(\exists c_1 > 0)(\forall z \in {\mathbb{C}}_+)\colon |f(z)| \leqslant c_1 \exp(\eta^{\tau_1}(c_1|z|))\), where η: [0; +∞) → (0; +∞) is an increasing function such that the function ln η(r) is convex with respect to ln r on [1; +∞).
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1260-1268
We study the structure of the distribution of a random variable such that the elements of its continued fraction form a Markov chain of order m. It is proved that an absolutely continuous component is absent from such distributions.
On the Point Spectrum of Self-Adjoint Operators That Appears under Singular Perturbations of Finite Rank
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1269-1276
We discuss purely singular finite-rank perturbations of a self-adjoint operator \(A\) in a Hilbert space ℋ. The perturbed operators \(\tilde A\) are defined by the Krein resolvent formula \((\tilde A - z)^{-1} = (A - z)^{-1} + B_z\), Im z ≠ 0, where \(B_z\) are finite-rank operators such that \(\operatorname{dom} B_z \cap \operatorname{dom} A = \{0\}\). For an arbitrary system of orthonormal vectors \(\{\psi_i\}_{i=1}^{n<\infty}\) satisfying the condition \(\operatorname{span}\{\psi_i\} \cap \operatorname{dom} A = \{0\}\) and an arbitrary collection of real numbers \(\lambda_i \in \mathbb{R}^1\), we construct an operator \(\tilde A\) that solves the eigenvalue problem \(\tilde A \psi_i = \lambda_i \psi_i,\ i = 1, \ldots, n\). We prove the uniqueness of \(\tilde A\) under the condition that rank \(B_z = n\).
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1277-1283
Up to unitary equivalence, we describe all irreducible triples of self-adjoint operators \(A_1, A_2, A_3\) such that \(\sigma(A_i) \subset \{-1, 0, 1\}\), \(i = 1, 2, 3\), and \(A_1 + A_2 + A_3 = 0\).
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1284-1290
In commutative associative third-rank algebras with principal identity over a complex field, we select bases such that hypercomplex monogenic functions constructed in these bases have components satisfying the three-dimensional Laplace equation. The notion of monogeneity for these functions is similar to the notion of monogeneity in the complex plane.
Ukr. Mat. Zh. - 2003. - 55, № 9. - pp. 1291-1296
We construct a multidimensional generalized diffusion process with the drift coefficient that is the (generalized) derivative of a vector-valued measure satisfying an analog of the Hölder condition with respect to volume. We prove the existence and continuity of the density of transition probability of this process and obtain standard estimates for this density. We also prove that the trajectories of the process are solutions of a stochastic differential equation. |
Consider the action of the group $G$ on the left cosets $G/H$ by left multiplication.
Proof.
Let $H$ be a subgroup of index $p$. Then the group $G$ acts on the left cosets $G/H$ by left multiplication.
It induces the permutation representation $\rho: G \to S_p$.
Let $K=\ker \rho$ be the kernel of $\rho$. Since $kH=H$ for $k\in K$, we have $K\subset H$. Let $[H:K]=m$.
By the first isomorphism theorem, the quotient group $G/K$ is isomorphic to a subgroup of $S_p$, thus $[G:K]$ divides $|S_p|=p!$ by Lagrange’s theorem. Since $[G:K]=[G:H][H:K]=pm$, we have $pm \mid p!$ and hence $m \mid (p-1)!$.
If $m$ has a prime factor $q$, then $q\geq p$ by the minimality of $p$; but the prime factors of $(p-1)!$ are all less than $p$. Thus $m \mid (p-1)!$ implies that $m=1$, and hence $H=K$. Therefore $H$ is normal since a kernel is always normal.
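A brute-force verification in Python for the smallest interesting case: G = S_3, where the smallest prime divisor of |G| = 6 is p = 2, and H = A_3 is a subgroup of index 2. Permutations are modeled as tuples with g[i] the image of i:

from itertools import permutations

def compose(a, b):  # (a o b)(i) = a(b(i))
    return tuple(a[i] for i in b)

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def is_even(g):     # parity from the number of inversions
    return sum(g[i] > g[j] for i in range(len(g)) for j in range(i + 1, len(g))) % 2 == 0

G = set(permutations(range(3)))
H = {g for g in G if is_even(g)}  # A_3
print(len(G) // len(H))           # the index p = 2
print(all(compose(compose(g, h), inverse(g)) in H for g in G for h in H))  # True: H is normal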
|
Probability Seminar, Spring 2019
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to [email protected]
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it survives for exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar
March 28, Shamgar Gurevitch, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries as in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M
Title: Stabilization of Diffusion Limited Aggregation in a Wedge.
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows us to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title:
Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
April 25, Kavita Ramanan, Brown
Title:
Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown
Title:
Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.
Tuesday , May 7, Van Vleck 901, 2:25pm, Duncan Dauvergne (Toronto)
Title:
The directed landscape
Abstract: I will describe the construction of the full scaling limit of (Brownian) last passage percolation: the directed landscape. The directed landscape can be thought of as a random scale-invariant `directed' metric on the plane, and last passage paths converge to directed geodesics in this metric. The directed landscape is expected to be a universal scaling limit for general last passage and random growth models (i.e. TASEP, the KPZ equation, the longest increasing subsequence in a random permutation). Joint work with Janosch Ortmann and Balint Virag. |
Let \( A \) be any real square matrix (not necessarily symmetric). Prove that: $$ (x'A x)^2 \leq (x'A A'x)(x'x) $$
The key point in proving this inequality is to recognize that \( x'A A'x \) is the squared norm of \( A'x \), so the claim is just the Cauchy-Schwarz inequality applied to \( A'x \) and \( x \).
Proof:
If \( x=0 \), then the inequality is trivial.
Suppose \( x \neq 0 \).
\( \frac{x'A x}{x'x}
= \frac{(A'x)'x}{\| x \|^2} = (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} \)
Because \( \frac{x}{\| x \|} \) is a unit vector, \( A'\frac{x}{\| x \|} \) can be considered as a scaling and rotation of \( \frac{x}{\| x \|} \) by \( A' \). Thus the resulting vector \( A'\frac{x}{\| x \|} \) has some norm \( \alpha \geq 0 \), and \( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|}=\alpha \cos(\beta) \) for some \( -\pi \leq \beta \leq \pi \), where \( \beta \) is the angle between the vector before and after premultiplying by \( A' \).
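Before continuing, a quick randomized numpy check of the target inequality (the dimension and seed are arbitrary choices for the sketch):

import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.normal(size=(4, 4))         # arbitrary, not necessarily symmetric
    x = rng.normal(size=4)
    lhs = (x @ A @ x) ** 2              # (x'Ax)^2
    rhs = (x @ A @ A.T @ x) * (x @ x)   # (x'AA'x)(x'x)
    assert lhs <= rhs + 1e-9
print("inequality holds in all trials")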
Now:
\( ( \frac{x'A x}{x'x} )^2 \)
\(= ( (A'\frac{x}{\| x \|})'\frac{x}{\| x \|} )^2 \)
\( =\alpha^2 \cos^2(\beta) \)
\( \leq \alpha^2 \)
\(= (A'\frac{x}{\| x \|})'A'\frac{x}{\| x \|} \)
\(= \frac{(A'x)'A'x}{\| x \|^2} \)
\(= \frac{x'A A'x}{x'x} \)
Finally, multiplying both sides by \( (x'x)^2 \) completes the proof. |
Hi, Can someone provide me some self reading material for Condensed matter theory? I've done QFT previously for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might look to machine learning techniques to help with the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible the solution strategy has some relation with the class of spacetime that is under consideration, thus that might help heavily reduce the parameters need to consider to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I always can talk about things in a non joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have a reduced probability of appearing in dreams), and 2) For 6 years, my dream has yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect if we have a region where spacetime is frame dragged in the clockwise direction being superimposed on a spacetime that is frame dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario that I can envision such can occur may be when two massive rotating objects with opposite angular velocity are on the course of merging)
Well. I'm a begginer in the study of General Relativity ok? My knowledge about the subject is based on books like Schutz, Hartle,Carroll and introductory papers. About quantum mechanics I have a poor knowledge yet.
So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes, GWs would interfere just like light waves.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like water waves, and light, will we see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, space-time would have a flat geometry, and if we put a spherical object in this region the metric will become Schwarzschild-like.
Pardon, I just spent some naive-philosophy time here with these discussions
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of the h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. though back in high school, regardless of language, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We were also taught the 4-space indentation convention
@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write MATLAB code online (for free)? Apparently another one of my institution's great inspirations is to have a MATLAB-oriented computational physics course without having MATLAB on the university's PCs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the PCs in the computer room, but if I connect to the university's server, which means running another environment remotely, I can find an older version of MATLAB). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc. |
K Babu Joseph
Articles written in Pramana – Journal of Physics
Volume 4 Issue 1 January 1975 pp 1-18 Mechanics
A new analysis of the nature of the solutions of the Hamilton-Jacobi equation of classical dynamics is presented based on Caratheodory’s theorem concerning canonical transformations. The special role of a principal set of solutions is stressed, and the existence of analogous results in quantum mechanics is outlined.
Volume 9 Issue 2 August 1977 pp 103-109 Classical Mechanics
A straightforward derivation of the Dirac-Schwinger covariance condition is given within the framework of classical field theory. The crucial role of the energy continuity equation in the derivation is pointed out. The origin of higher order derivatives of delta function is traced to the presence of higher order derivatives of canonical coordinates and momenta in the energy density functional.
Volume 10 Issue 6 June 1978 pp 563-575 Nuclear And Particle Physics
A sine-Gordon type scalar field with a $$\cos^4[(\sqrt{\bar\lambda}\,\phi)/m]$$ potential in two dimensions is studied and a static,
Volume 11 Issue 1 July 1978 pp 17-26
Two dimensional sine-Gordon (SG) field theory on a lattice is studied using the single-site basis variational method of Drell and others. The nature of the phase transition associated with the spontaneous symmetry breakdown in a SG field system is clarified to be of second order. A generalisation is offered for a SG-type field theory in two dimensions with a potential of the form [cos(√n
Volume 11 Issue 2 August 1978 pp 195-204 Particle Physics
An investigation of the newly discovered charmed mesons
Volume 16 Issue 1 January 1981 pp 49-60 Particle Physics
We examine the consequences of a variable (density-dependent) bag pressure term and a fixed hadronic size in the phenomenological MIT bag model for hadron spectroscopy. Mass spectrum of the low-lying baryons and mesons, baryon magnetic moments and the hadron mass splittings are estimated. These are found to be in closer agreement with experiment than the MIT results.
Volume 19 Issue 1 July 1982 pp 99-102 Particle Physics
The suggestion made by Lipkin regarding the quark masses and magnetic moments of baryons is examined in the context of the variable pressure bag model. We find that the exact agreement obtained by Lipkin between theory and experiment in the case of
Volume 19 Issue 2 August 1982 pp 175-182 Nuclear And Particle Physics
In this paper we consider the experimentally observed dibaryons as six-quark states. The mass spectrum of
Volume 22 Issue 2 February 1984 pp 111-115 Quantum Mechanics
The nonlinear differential equation resulting from the use of the ’t Hooft-Corrigan-Fairlie-Wilczek ansatz in SU(2) Yang-Mills gauge theory is solved by the bilinear operator method. The solutions which are singular are interpreted as fluctuations involving no flux transport. However, these objects may play a tunnelling role similar to that of merons.
Volume 26 Issue 6 June 1986 pp 465-476
We discuss a perturbative scheme for the determination of the bifurcation rate δ for a specific map, by extending Virendra Singh’s method of evaluating the scaling factor α. The method is applied to a quartic map and the values obtained, α = 1.690781026 and δ = 7.23682924 are in good agreement with the numerically computed values reported in the literature. The perturbative approach is found to be more efficient than other existing methods.
Volume 31 Issue 1 July 1988 pp 1-8 Statistical Physics
The Melnikov-Holmes method is used to study the onset of chaos in a driven pendulum with nonlinear dissipation. Detailed numerical studies reveal many interesting features like a chaotic attractor at low frequencies, band formation near escape from the potential well and a sequence of subharmonic bifurcations inside the band that accumulates at the homoclinic bifurcation point.
Volume 39 Issue 3 September 1992 pp 193-252 Review
This paper is a review of the present status of studies relating to occurrence of deterministic chaos and its characterization in one-dimensional maps. As our primary aim is to introduce the nonspecialists into this fascinating world of chaos we start from very elementary concepts and give sufficient arguments for clarity of ideas. The two main scenarios during onset of chaos viz. the period doubling and intermittency are dealt with in detail. Although the logistic map is often discussed by way of illustration, a few more interesting maps are mentioned towards the end.
Volume 39 Issue 5 November 1992 pp 521-528 Research Articles
The first order perturbative correction to the energy levels of a boson realization of a
Volume 39 Issue 5 November 1992 pp 529-539 Research Articles
We present an analytic perturbative method for calculating
of the critical invariant circle of the polynomial circle map. The scaling behaviour is found to depend on q
Volume 42 Issue 4 April 1994 pp 285-297
Non perturbative analogues of the Gaussian effective potential (GEP) are defined for quantum oscillators obeying
EP) and the non perturbative q
Volume 42 Issue 4 April 1994 pp 299-309
A detailed physical characterisation of the coherent states and squeezed states of a real
Volume 45 Issue 4 October 1995 pp 311-317
A
Volume 46 Issue 4 April 1996 pp 305-314 Research Articles
A nonlinear quintic Schrödinger equation (NLQSE) is developed and studied in detail. It is found that the NLQSE has soliton solutions, the stability of which is analysed using variational method. It is also found that the soliton pulse width in the materials supporting NLQSE is small compared to soliton pulse width of the commonly studied nonlinear cubic Schrödinger equation (NLCSE).
Volume 49 Issue 6 December 1997 pp 591-601
The static effective potential for a scalar field with $\Phi^6$ interaction is calculated using the effective action in the Schrödinger picture formalism. It is found that the effective potential obtained is the same as the Gaussian effective potential as far as the static case is concerned. Equivalence with the CJT formalism can also be established. As in the CJT formalism, an unrenormalized mass term persists after renormalization. Nonzero turning points are obtained both for positive and negative
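For reference, the Gaussian effective potential invoked here is conventionally defined variationally over normalized Gaussian states $|\Omega,\phi_0\rangle$ centred at the classical value $\phi_0$ with variational mass parameter $\Omega$ (a textbook definition, not a formula quoted from this paper):

$$ V_G(\phi_0) = \min_{\Omega}\,\frac{\langle\Omega,\phi_0|\,H\,|\Omega,\phi_0\rangle}{V}, $$

where $V$ is the spatial volume.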
Volume 50 Issue 2 February 1998 pp 133-148
The finite temperature effective potential for a scalar field with $\Phi^6$ interaction is calculated by extending the CJT formalism for composite operators. It is found that unrenormalized terms appear in the effective potential due to the presence of an unrenormalized mass term. Nonzero turning points are obtained both for positive and negative
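As a reminder of the formalism being extended, the CJT effective action for composite operators has the standard form (conventions vary; this is the usual expression, not a formula quoted from the paper):

$$ \Gamma[\phi, G] = I[\phi] + \tfrac{i}{2}\,\mathrm{Tr}\,\ln G^{-1} + \tfrac{i}{2}\,\mathrm{Tr}\!\left[D^{-1}(\phi)\,G\right] + \Gamma_2[\phi, G] + \text{const}, $$

where $D^{-1}(\phi) = \delta^2 I/\delta\phi\,\delta\phi$ and $\Gamma_2$ is the sum of two-particle-irreducible vacuum diagrams; the effective potential follows by evaluating $\Gamma$ at constant $\phi$ and stationary $G$.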
Volume 56 Issue 4 April 2001 pp 477-486 Research Articles
We develop a stochastic formulation of cosmology in the early universe, after considering the scatter in the redshift–apparent magnitude diagram at early epochs as observational evidence for the non-deterministic evolution of the early universe. We consider qualitatively the stochastic evolution of the density parameter in the early universe after the inflationary phase, under the assumption of fluctuating
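The abstract is cut off above, and the paper's actual evolution equation is not reproduced here. Purely as an illustration of what a stochastic (Langevin-type) evolution of a density parameter looks like numerically, here is a generic Euler–Maruyama sketch with made-up drift and noise terms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy drift and noise, chosen only to illustrate the numerics; they are
# NOT the paper's equations (the abstract above is truncated).
def drift(omega):
    return -0.5 * (omega - 1.0)   # made-up relaxation toward Omega = 1

SIGMA = 0.05                       # assumed noise strength
dt, n_steps, n_paths = 1e-3, 10_000, 1000

omega = np.full(n_paths, 1.2)      # assumed initial density parameter
for _ in range(n_steps):
    omega += drift(omega) * dt + SIGMA * np.sqrt(dt) * rng.standard_normal(n_paths)

# Ensemble statistics of the stochastically evolved density parameter.
print(f"mean = {omega.mean():.4f}, std = {omega.std():.4f}")
```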
R K Bhowmik
Articles written in Pramana – Journal of Physics
Volume 55 Issue 3 September 2000 pp L471-L478 Rapid Communication
Excited states of $^{63}$Cu were populated via the $^{52}{\rm Cr} + {}^{16}{\rm O}$ (65 MeV) reaction using the gamma detector array equipped with a charged particle detector array for reaction channel separation. On the basis of $\gamma-\gamma$ coincidence relations and angular distribution ratios, a level scheme was constructed up to $E_{x} = 7$ MeV and $J^{\pi} = 23/2^{(+)}$. The decay scheme deduced was interpreted in terms of shell model calculations, with a restricted basis of the $f_{5/2}$, $p_{3/2}$, $p_{1/2}$, $g_{9/2}$ orbitals outside a $^{56}_{28}$Ni core.
Volume 75 Issue 2 August 2010 pp 317-331 Accelerators and Instrumentation for Nuclear Physics
N Madhavan S Nath T Varughese J Gehlot A Jhingan P Sugathan A K Sinha R Singh K M Varier M C Radhakrishna E Prasad S Kalkal G Mohanto J J Das Rakesh Kumar R P Singh S Muralithar R K Bhowmik A Roy Rajesh Kumar S K Suman A Mandal T S Datta J Chacko A Choudhury U G Naik A J Malyadri M Archunan J Zacharias S Rao Mukesh Kumar P Barua E T Subramanian K Rani B P Ajith Kumar K S Golda
Hybrid recoil mass analyzer (HYRA) is a unique, dual-mode spectrometer designed to carry out nuclear reaction and structure studies in heavy and medium-mass nuclei using gas-filled and vacuum modes, respectively, and has the potential to address newer domains in nuclear physics accessible using high-energy, heavy-ion beams from the superconducting LINAC accelerator (being commissioned) and the ECR-based high-current injector system (planned) at IUAC. The first stage of HYRA is operational, and initial experiments have been carried out using the gas-filled mode for the detection of heavy evaporation residues and heavy quasielastic recoils in the direction of the primary beam. Excellent primary-beam rejection and transmission efficiency (comparable with other gas-filled separators) have been achieved using a smaller focal-plane detection system. There are plans to couple HYRA to other detector arrays such as the Indian national gamma array (INGA) and the $4\pi$ spin spectrometer for ER-tagged spectroscopic/spin-distribution studies and for focal-plane decay measurements.
Volume 79 Issue 3 September 2012 pp 403-415
High-spin states of $^{216}$Ra $(Z = 88, N = 128)$ have been investigated through the $^{209}$Bi($^{10}$B, 3n) reaction at an incident beam energy of 55 MeV and the $^{209}$Bi($^{11}$B, 4n) reaction at incident beam energies ranging from 65 to 78 MeV. Based on $\gamma\gamma$ coincidence data, the level scheme for $^{216}$Ra has been considerably extended up to $\sim 33\hbar$ spin and 7.2 MeV excitation energy in the present experiment, with the placement of 28 new $\gamma$-transitions over what had been reported earlier. Tentative spin-parity assignments are made for the newly proposed levels on the basis of the DCO ratios corresponding to strong gates. Empirical shell model calculations were carried out to provide an understanding of the underlying nuclear structure.
Volume 82 Issue 4 April 2014 pp 683-696
Three different types of experiments have been performed to explore complete and incomplete fusion dynamics in heavy-ion collisions. First, excitation functions of the evaporation residues produced in the $^{20}$Ne + $^{165}$Ho system were measured at projectile energies of ≈2–8 MeV/nucleon. The measured cumulative and direct cross-sections were compared with the theoretical model code PACE-2, which takes into account only the complete fusion process. The incomplete fusion fraction was found to depend sensitively on the projectile energy and on the mass asymmetry between the projectile and target systems. Second, forward recoil range distributions of the evaporation residues produced in the $^{20}$Ne + $^{165}$Ho system were measured at a projectile energy of ≈8 MeV/nucleon. Some evaporation residues showed additional peaks in the measured forward recoil range distributions at cumulative thicknesses smaller than the range expected for residues produced via complete fusion. These results indicate the occurrence of incomplete fusion involving the breakup of $^{20}$Ne into $^{4}$He + $^{16}$O and/or $^{8}$Be + $^{12}$C, followed by fusion of one of the fragments with the target nucleus $^{165}$Ho. Third, the spin distribution of the evaporation residues produced in the $^{16}$O + $^{124}$Sn system was measured at a projectile energy of ≈6 MeV/nucleon. The residues produced as incomplete fusion products, associated with the fast $\alpha$ and $2\alpha$ emission channels observed in the forward cone, were found to be distinctly different from the residues produced as complete fusion products. The spin distributions also indicate that in incomplete fusion channels the input angular momentum ($J_0$) increases with fusion incompleteness relative to complete fusion channels. These observations clearly show that the production of fast forward $\alpha$-particles arises from relatively large angular momenta in the entrance channel, corresponding to peripheral collisions.
Volume 83 Issue 5 November 2014 pp 807-815
P Sugathan A Jhingan K S Golda T Varughese S Venkataramanan N Saneesh V V Satyanarayana S K Suman J Antony Ruby Shanti K Singh S K Saini A Gupta A Kothari P Barua Rajesh Kumar J Zacharias R P Singh B R Behera S K Mandal I M Govil R K Bhowmik
The characteristics and performance of the newly commissioned neutron detector array at IUAC are described. The array consists of 100 BC501 liquid scintillators mounted in a semispherical geometry at a distance of 175 cm from the reaction point. Each detector is a $5'' \times 5''$ cylindrical cell coupled to a $5''$ diameter photomultiplier tube (PMT). Signal processing is realized using custom-designed, home-made integrated electronic modules which perform neutron–gamma discrimination using zero-cross timing and the time-of-flight (TOF) technique. Compact custom-built high-voltage power supplies developed using DC–DC converters are used to bias the detectors. The neutrons are recorded in coincidence with fission fragments, which are detected using multi-wire proportional counters mounted inside a 1 m diameter stainless-steel target chamber. The detectors and electronics have been tested off-line using radioactive sources, and the results are presented.
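As a small worked example of the TOF technique at this geometry: with the quoted 175 cm flight path, a neutron's kinetic energy follows from its measured flight time. The sketch below uses the non-relativistic relation E = ½mv², adequate at the few-MeV energies typical of fission neutrons (the relativistic correction is at the per-cent level there); the example flight times are arbitrary.

```python
# Time-of-flight to kinetic energy for the 175 cm flight path quoted above.
M_N_MEV = 939.565          # neutron rest mass energy, MeV
C_CM_PER_NS = 29.9792458   # speed of light, cm/ns

def neutron_energy_mev(tof_ns, flight_path_cm=175.0):
    """Non-relativistic kinetic energy E = (1/2) m v^2 from flight time."""
    beta = flight_path_cm / (tof_ns * C_CM_PER_NS)   # v/c
    return 0.5 * M_N_MEV * beta ** 2

for tof in (30.0, 60.0, 120.0):    # arbitrary example flight times, ns
    print(f"TOF {tof:6.1f} ns  ->  E ~ {neutron_energy_mev(tof):5.2f} MeV")
```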