Definition:Internal Direct Product Definition: Let $\left({S_1, \circ {\restriction_{S_1}}}\right)$ and $\left({S_2, \circ {\restriction_{S_2}}}\right)$ be closed algebraic substructures of $\left({S, \circ}\right)$, where $\circ {\restriction_{S_1}}, \circ {\restriction_{S_2}}$ are the operations induced by the restrictions of $\circ$ to $S_1, S_2$ respectively. The structure $\left({S, \circ}\right)$ is the internal direct product of $S_1$ and $S_2$ if the mapping: $C: S_1 \times S_2 \to S: C \left({\left({s_1, s_2}\right)}\right) = s_1 \circ s_2$ is an isomorphism. It can be seen that the mapping $C$ is the restriction to the subset $S_1 \times S_2$ of the mapping $\circ$ on $S \times S$. More generally, let $\left({S_1, \circ {\restriction_{S_1}}}\right), \ldots, \left({S_n, \circ {\restriction_{S_n}}}\right)$ be closed algebraic substructures of $\left({S, \circ}\right)$, where $\circ {\restriction_{S_1}}, \ldots, \circ {\restriction_{S_n}}$ are the operations induced by the restrictions of $\circ$ to $S_1, \ldots, S_n$ respectively. The structure $\left({S, \circ}\right)$ is the internal direct product of $\left \langle {S_n} \right \rangle$ if the mapping: $\displaystyle C: \prod_{k \mathop = 1}^n S_k \to S: C \left({s_1, \ldots, s_n}\right) = \prod_{k \mathop = 1}^n s_k$ is an isomorphism. The set of algebraic substructures $\left({S_1, \circ {\restriction_{S_1}}}\right), \left({S_2, \circ {\restriction_{S_2}}}\right), \ldots, \left({S_n, \circ {\restriction_{S_n}}}\right)$ whose direct product is isomorphic with $\left({S, \circ}\right)$ is called a decomposition of $S$. Also known as: Some authors call this just the direct product. Some authors call it the direct composite. Also see: Definition:External Direct Product, Definition:Internal Group Direct Product, Definition:Ring Direct Sum. Results about internal direct products can be found here.
Given languages $L_1, L_2$, define $X(L_1,L_2)$ by $\qquad X(L_1,L_2) = \{w \mid w \not\in L_1 \cup L_2 \}$. If $L_1$ and $L_2$ are regular, how can we show that $X(L_1,L_2)$ is also regular? There are several ways to show that a language is regular (check the question "How to prove a language is regular?"). Specifically for the language in your question, start with DFAs for $L_1$ and $L_2$ and try to construct an NFA for $X(L_1,L_2)$ using them. More details below: Note that $X(L_1,L_2) = \overline{L_1} \cap \overline{L_2}$. From the DFA of $L_1$ construct the DFA of $\overline {L_1}$ (making every final state non-final, and vice versa). Do the same for $L_2$. The intersection of regular languages can be constructed via the product machine (see this question). [Of course, if you already know that the complement of a regular language is also regular, and likewise the intersection of two regular languages, you are done without constructing those DFAs.] Expanding on Zach's comment, you should know the following things: Now you should be able to pick a few of these that combined make up your $X$ language function/operator/whatever you call something like that (well, it's just a language defined in terms of others). Just for some background, proofs for these properties can be found (IN A REALLY LARGE FONT) here. Give this a go; if you're really stuck, I'll put a bit more in the spoilers below (but with little explanation). $X(A_{1},A_{2}) = \overline{A_{1}\cup A_{2}} = \bar{A_{1}}\cap\bar{A_{2}}$.
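To make the construction concrete, here is a minimal sketch (my own illustration, not from the answer; the DFA encoding and the toy languages are assumptions): complement both DFAs by swapping accepting and non-accepting states, then build the product machine whose accepting states are the pairs accepting in both complements.

from itertools import product

def complement(dfa):
    # DFA given as (states, alphabet, delta, start, accepting); flip accepting states.
    states, alphabet, delta, start, accepting = dfa
    return (states, alphabet, delta, start, states - accepting)

def intersect(d1, d2):
    # Product construction: accept iff both DFAs accept.
    s1, alph, t1, q1, f1 = d1
    s2, _,    t2, q2, f2 = d2
    states = set(product(s1, s2))
    delta = {((a, b), c): (t1[(a, c)], t2[(b, c)]) for (a, b) in states for c in alph}
    accepting = {(a, b) for (a, b) in states if a in f1 and b in f2}
    return (states, alph, delta, (q1, q2), accepting)

def accepts(dfa, word):
    _, _, delta, q, accepting = dfa
    for c in word:
        q = delta[(q, c)]
    return q in accepting

# Toy example (assumed): L1 = words containing 'a', L2 = words of even length, over {a, b}.
L1 = ({"seen", "not"}, {"a", "b"},
      {("not", "a"): "seen", ("not", "b"): "not",
       ("seen", "a"): "seen", ("seen", "b"): "seen"}, "not", {"seen"})
L2 = ({0, 1}, {"a", "b"},
      {(0, "a"): 1, (0, "b"): 1, (1, "a"): 0, (1, "b"): 0}, 0, {0})

X = intersect(complement(L1), complement(L2))   # X(L1, L2) = complement(L1) ∩ complement(L2)
print(accepts(X, "b"))    # True: 'b' is in neither L1 nor L2
print(accepts(X, "ab"))   # False: 'ab' is in both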
2019-09-04 12:06 Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019 2019-08-15 17:39 LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018 2019-08-15 17:36 Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018 2019-02-12 14:01 XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) Recent years have seen a resurgence of interest in searches for exotic states, motivated by precision spectroscopy studies of beauty and charm hadrons that have provided the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018 2019-01-21 09:59 Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model.
We present the latest measurements by LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018 2019-01-15 14:22 Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018 2018-12-20 16:31 Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of "online" and "offline" analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process such that Online data are immediately available offline for physics analysis (Turbo analysis); the computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018 2018-12-14 16:02 The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1\,\mathrm{MeV}\ n_\mathrm{eq}\,\mathrm{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
I want to find $$\lim \limits_{x\to 0}{\sin{42x} \over \sin{6x}-\sin{7x}}$$ without resorting to L'Hôpital's rule. Numerically, this computes as $-42$. My idea is to examine two cases: $x>0$ and $x<0$ and use ${\sin{42x} \over \sin{6x}}\to 7$ and ${\sin{42x} \over \sin{7x}}\to 6$. I can't find the appropriate inequalities to use the squeeze theorem, though. Do you have suggestions? Hint: $$\frac{\sin42x}{\sin6x-\sin7x}=\cfrac{\frac{\sin42x}{42x}}{\frac17\frac{\sin6x}{6x}-\frac16\frac{\sin7x}{7x}}$$ I provide another approach which uses the simpler limit $\lim\limits_{x \to 0}\cos x = 1$ compared to $\lim\limits_{x \to 0}\dfrac{\sin x}{x} = 1$. Let $x = 2t$ and the given expression can be rewritten as $$-\frac{\sin 84t}{2\cos 13t\sin t}$$ and $\cos 13t \to 1$ therefore the desired limit is equal to limit of $-\dfrac{\sin 84t}{2\sin t}$ as $ t\to 0$. Now it is easy to prove via induction that $$\lim_{t\to 0}\frac{\sin nt}{\sin t} = n$$ for all positive integers $n$ and therefore the desired limit is $-84/2 = -42$. On request of user "Simple Art" (via comments) I show via induction that $$\lim_{t \to 0}\frac{\sin nt}{\sin t} = n\tag{1}$$ for all positive integers $n$. In what follows I will use the result that $\cos t \to 1$ as $t \to 0$ (and nothing more than that). For $n = 1$ we see that the claim holds. Let's suppose that it holds for $n = m$ so that $(\sin mt)/\sin t \to m$ as $t \to 0$. Now we can see that $$\frac{\sin (m + 1)t}{\sin t} = \frac{\sin mt}{\sin t}\cos t + \cos mt$$ and letting $t \to 0$ we see that $$\lim_{t \to 0}\frac{\sin(m + 1)t}{\sin t} = m \cdot 1 + 1 = m + 1$$ so that the claim holds for $n = m + 1$. Thus $(1)$ holds for all positive integers $n$. It is easy to extend the claim for all rational values of $n$. The whole point of the above gymnastics (as compared to the simpler and beautiful hint by "Simple Art") is to show that the current question can be solved by using a simpler limit $\cos t \to 1$ as $t \to 0$ instead of using the slightly more complicated limit $(\sin t)/t \to 1$ as $t \to 0$. Update: Once the limit formula $$\lim_{x\to 0}\frac{\sin nx} {\sin x} =n$$ is available for positive integer $n$, the current question is easily solved by dividing the numerator and denominator by $\sin x$ and then taking limit to get $42/(6-7)=-42$ as answer. I wonder why I converted the difference in denominator to a product. Perhaps achieving simplicity is not simple.
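A quick numerical check (my own addition, not part of either answer) shows the quotient settling near $-42$ from both sides:

import math

def f(x):
    return math.sin(42 * x) / (math.sin(6 * x) - math.sin(7 * x))

for x in [0.01, 0.001, 0.0001, -0.0001]:
    print(f"x = {x:+.4f}   f(x) = {f(x):.4f}")
# The values approach -42, matching the algebraic result.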
A disassembled U8 brushless motor. The point of a motor is to transmit torque, so motor bearings play an integral role in taking on load and minimizing friction losses. This particular motor supports a max thrust of 2.6 kg using two bearings with load ratings of 2070 N. Guesstimating the moment resisted by the bearings: max stall torque from the motor is 0.912 Nm, and the rotor shaft is pressed on with probably a 5 µm tolerance. Assuming no axial loading on the bearings, the bearing pair reacts this torque as a force couple, so each bearing sees a radial force of approx. 50 N. We call the distance between bearings $a = 18\,\mathrm{mm}$. $F_{bearing}$ is the load on each bearing, which includes both applied loads and misalignment loads. By treating this shaft like a cantilever beam, we can calculate forces on the bearings due to misalignment: $F = kx$, and $F_{bearing}\cdot a = K_{moment}\cdot\alpha = \frac{2EI}{L}\cdot \frac{\delta_{tol}}{L}$, so $F_{bearing} = \frac{2EI\delta_{tol}}{aL^2} + F_{applied}$. For this motor, $L = 25\,\mathrm{mm}$, $E = 69\,\mathrm{GPa}$ (material assumed to be 6061 aluminium), and $I = 4019\,\mathrm{mm^4}$, giving $F_{misalignment} \approx 247\,\mathrm{N}$ worst case. So each bearing experiences roughly 300 N at the motor's max torque, or nominally 15% of its load rating.
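For reference, a short calculation sketch (my own, using only the figures quoted above) reproduces these estimates:

# Rough bearing-load estimate for the U8 motor, with the values quoted above.
E = 69e9          # Pa, 6061 aluminium (assumed)
I = 4019e-12      # m^4, shaft second moment of area
L = 25e-3         # m, cantilevered shaft length
a = 18e-3         # m, bearing spacing
delta_tol = 5e-6  # m, press-fit misalignment
torque = 0.912    # N*m, max stall torque

F_applied = torque / a                          # force couple reacted across the bearing pair
F_misalign = 2 * E * I * delta_tol / (a * L**2) # cantilever misalignment load
F_bearing = F_applied + F_misalign

print(f"applied   : {F_applied:6.1f} N")        # ~51 N
print(f"misalign  : {F_misalign:6.1f} N")       # ~247 N
print(f"total     : {F_bearing:6.1f} N")        # ~298 N
print(f"fraction  : {F_bearing / 2070:.1%}")    # ~15% of the 2070 N rating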
When modelling ARCH/GARCH effects, do we use excess returns? Is it common in the literature to use excess returns when modelling volatility as opposed to raw return data? GARCH models have little to do with the economics of the data generating process of the series you model, so both returns and excess returns (and log-returns, and inflation-adjusted ones, even ones measured in yen!) are valid input. However, there is usually the conditional mean equation besides the variance equation in a GARCH set-up, and your risk-free, perfectly predictable component would in this case be part of the conditional mean. You can have something like this: $$ r_{t+1} = r_{f,t} + \mu + \varepsilon_{t+1}, \\ \varepsilon_{t+1} \sim N(0, \sigma_{t+1}^2), \\ \sigma_{t+1}^2 = \alpha + \beta \sigma_t^2 + \gamma \varepsilon_t^2, $$ where the first equation is the mean equation, and you estimate $\{ \mu, \alpha, \beta, \gamma \}$. In this case, ignoring the risk-free rate $r_{f,t}$ would lead to erroneous estimates. But again, it's up to you whether or not to assume this holds.
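To make the set-up concrete, here is a small simulation sketch of the model above (my own illustration; the parameter values and the constant risk-free rate are assumed, not taken from the answer):

import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters.
mu, alpha, beta, gamma = 2e-4, 1e-6, 0.90, 0.08
T = 1000
r_f = np.full(T, 1e-4)                       # risk-free rate, taken constant here

r = np.zeros(T)
eps = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = alpha / (1.0 - beta - gamma)     # start at the unconditional variance

for t in range(T - 1):
    eps[t] = rng.normal(0.0, np.sqrt(sigma2[t]))
    r[t + 1] = r_f[t] + mu + eps[t]                                  # conditional mean equation
    sigma2[t + 1] = alpha + beta * sigma2[t] + gamma * eps[t] ** 2   # conditional variance equation

# Dropping r_f from the mean equation pushes it into the residuals and
# biases the estimate of mu (and, through eps, the variance parameters).
print(r[:5])
print(sigma2[:5])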
Question: Integrate $$\int \frac{x+3}{\sqrt{x^{2}+6x}}\,dx$$ Substitution Rule: We can perform a u-substitution to evaluate integrals containing a function and its derivative. In this case, we assign an expression to $u$ and, using derivatives, exchange $dx$ for $du$. This derives directly from the chain rule and relates integrals to derivatives through antidifferentiation. Answer and Explanation: Here, we choose $u = x^2 + 6x$. If we do this, we see that $$\begin{align*} \frac{du}{dx} &= 2x + 6 \\ \frac{du}{2} &= (x + 3) \ dx \end{align*}$$ Then we obtain $$\begin{align*} \int \frac{x+3}{\sqrt{x^{2}+6x}}dx &= \int \frac{du}{2\sqrt{u}} \\ &= \frac{1}{2}\int u^{-1/2} \ du \\ &= u^{1/2} + C \\ &= \sqrt{x^2 + 6x} + C \end{align*}$$
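A short symbolic check (my own addition, assuming SymPy is available) confirms the result:

import sympy as sp

x = sp.symbols('x', positive=True)
integrand = (x + 3) / sp.sqrt(x**2 + 6*x)

antiderivative = sp.integrate(integrand, x)
print(antiderivative)   # should be sqrt(x**2 + 6*x), possibly written as sqrt(x*(x + 6))

# Differentiating the claimed answer recovers the integrand exactly:
print(sp.simplify(sp.diff(sp.sqrt(x**2 + 6*x), x) - integrand))   # 0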
This set of Ordinary Differential Equations Multiple Choice Questions & Answers (MCQs) focuses on "Newton's Law of Cooling and Escape Velocity". 1. According to Newton's law of cooling, "the change of temperature of a body is proportional to the difference between the temperature of the body and that of the surrounding medium". If $t_1$℃ is the initial temperature of the body and $t_2$℃ is the constant temperature of the medium, and $T$℃ is the temperature of the body at any time $t$, find the expression for $T$℃ as a function of $t_1$℃, $t_2$℃ and time $t$. a) $T=t_1+t_2\,e^{-kt}$ b) $T=t_2+(t_1-t_2)\,e^{-kt}$ c) $T=t_1+(t_1-t_2)\,e^{kt}$ d) $T=t_2+t_1\,e^{kt}$ Answer: b. Explanation: According to the definition of Newton's law of cooling, $\frac{dT}{dt} \propto (T-t_2)$, or $\frac{dT}{dt} = -k(T-t_2)$, where $k$ is a constant of proportionality and the negative sign indicates the cooling of the body as time increases. Since $t_1$℃ is the initial temperature of the body, at $t=0$ we have $T(0) = t_1$℃. The equation $\frac{dT}{dt} = -k(T-t_2)$ with $T(0) = t_1$℃ is of variable separable form, i.e. $\int \frac{dT}{T-t_2} = \int -k\,dt + c$, so $\log (T-t_2) = -kt + c$, giving $T-t_2 = pe^{-kt}$, where $p=e^c$ is a constant. Using the initial condition $T(0)= t_1$ we get $t_1-t_2=p$; substituting back, we obtain $T=t_2+(t_1-t_2)\,e^{-kt}$. 2. A body in air at 25℃ cools from 100℃ to 75℃ in 1 minute. What is the temperature of the body at the end of 3 minutes? (Take $\log(1.5)=0.4$.) a) 40℃ b) 47.5℃ c) 42.5℃ d) 50℃ Answer: b. Explanation: By Newton's law of cooling we know that $T = t_2+ (t_1-t_2)\,e^{-kt}$, with $t_1=100$℃, $t_2=25$℃. When $t=1$, $T(1) = 25 + 75\,e^{-k} = 75$℃, so $50/75 = 2/3 = e^{-k}$, i.e. $3/2=e^{k}$; taking logs, $k=\log(1.5)=0.4$. To find $T$ at $t=3$ minutes, using this value of $k$ we get $T = 25 + 75\,e^{-0.4 \times 3}= 47.5$℃ (using $e^{-1.2}=0.3$). 3. A bottle of mineral water at a room temperature of 72℉ is kept in a refrigerator where the temperature is 44℉. After half an hour the water has cooled to 61℉. What is the temperature of the water after another half an hour? (Take $\log \frac{28}{17} = 0.498$, $e^{-0.99}=0.37$.) a) 18℉ b) 9.4℉ c) 54.4℉ d) 36.4℉ Answer: c. Explanation: By Newton's law of cooling, $T=t_2+(t_1-t_2)\,e^{-kt}$, with $t_1=72$℉, $t_2=44$℉. At $t=$ half an hour $=30$ minutes, $T=61$℉; finding $k$ from the given values, $61=44+28e^{-30k}$, so $\frac{17}{28} = e^{-30k}$, or $\frac{28}{17} = e^{30k}$; taking logs, $\log \frac{28}{17} = 30k$, so $k=0.0166$. To find $T$ when $t = 30 + 30 = 60$ minutes: $T = 44 + 28\,e^{-0.0166 \times 60}= 54.4$℉. 4. The radius of the moon is roughly 2000 km. The acceleration of gravity at the surface of the moon is about $\frac{g}{6}$, where $g$ is the acceleration of gravity at the surface of the earth.
What is the velocity of escape for the moon? (Take $g=10\,\mathrm{m\,s^{-2}}$.) a) 2.58 $\mathrm{km\,s^{-1}}$ b) 4.58 $\mathrm{km\,s^{-1}}$ c) 6.28 $\mathrm{km\,s^{-1}}$ d) 12.28 $\mathrm{km\,s^{-1}}$ Answer: a. Explanation: Let $R$ be the radius of the earth and $r$ the variable distance from its centre. From Newton's law, $a = \frac{dv}{dt} = \frac{k}{r^2}$; when $r=R$, $a=-g$ due to the retardation of the body, so $-gR^2=k$. Substituting the value of $k$ back we get $\frac{dv}{dt} = \frac{-gR^2}{r^2}$, and since $\frac{dv}{dt} = \frac{dr}{dt}\frac{dv}{dr} = v\frac{dv}{dr}$, solving the DE for $v$ gives $\int v \,dv = \int \frac{-gR^2}{r^2} dr + c \rightarrow v^2 = \frac{2gR^2}{r} + C$, where $2c=C$ is a constant. To find $C$ we use $v=v_e$ at $r=R$, which gives $C = v_e^2 - \frac{2gR^2}{R} = v_e^2 - 2gR$. Substituting the value of $C$ we get $v^2=\frac{2gR^2}{r} + v_e^2 - 2gR$. If $r\gg R$ then $\frac{2gR^2}{r} \to 0$, and for the particle to escape from the earth we need $v\geq 0$, so $v_e^2 - 2gR\geq 0$, i.e. $v_e=\sqrt{2gR}$. To find $v_e$ for the moon, $g$ becomes $\frac{g}{6}$, with $R=2000\,\mathrm{km}=2\times 10^6\,\mathrm{m}$ and $g=10\,\mathrm{m\,s^{-2}}$; therefore $v_e = \sqrt{2\cdot\frac{10}{6}\cdot(2\times10^6)} = 2.58\,\mathrm{km\,s^{-1}}$. Sanfoundry Global Education & Learning Series – Ordinary Differential Equations. To practice all areas of Ordinary Differential Equations, here is the complete set of 1000+ Multiple Choice Questions and Answers.
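A quick numerical check of the two worked answers above (my own sketch, using only the values given in the questions):

import math

# Q3: bottle cooling, T = t2 + (t1 - t2) * exp(-k t), k from the 30-minute data point.
t1, t2 = 72.0, 44.0
k = math.log(28 / 17) / 30                   # per minute
print(t2 + (t1 - t2) * math.exp(-k * 60))    # ~54.4 F after one hour

# Q4: lunar escape velocity, v_e = sqrt(2 * (g/6) * R).
g, R = 10.0, 2e6                             # m/s^2, m
print(math.sqrt(2 * (g / 6) * R) / 1000)     # ~2.58 km/s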
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ... J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Physics > Instrumentation and Detectors. Title: Measuring the Faraday effect in olive oil using permanent magnets and Malus' law (Submitted on 20 Aug 2019) Abstract: We present a simple permanent magnet set-up that can be used to measure the Faraday effect in gases, liquids and solids. By fitting the transmission curve as a function of polarizer angle (Malus' law) we average over fluctuations in the laser intensity and can extract phase shifts as small as $\pm$ 50 $\mu$rads. We have focused on measuring the Faraday effect in olive oil and find a Verdet coefficient of $V$ = 192 $\pm$ 1 deg T$^{-1}$ m$^{-1}$ at approximately 20 $^{\circ}$C for a wavelength of 659.2 nm. We show that the Verdet coefficient can be fit with a Drude-like dispersion law $A/(\lambda^2 - \lambda_0^2)$ with coefficients $A$ = 7.9 $\pm$ 0.2 $\times$ 10$^{7}$ deg T$^{-1}$ m$^{-1}$ nm$^2$ and $\lambda_0$ = 142 $\pm$ 13 nm. Submission history: From: Daniel Carr [v1] Tue, 20 Aug 2019 17:26:46 GMT (4836kb,D)
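As a quick consistency check (my own addition, not from the paper), plugging the quoted fit parameters back into the Drude-like law at the measurement wavelength roughly reproduces the quoted Verdet coefficient:

# Verdet coefficient from the quoted dispersion fit, V = A / (lambda^2 - lambda0^2).
A = 7.9e7           # deg T^-1 m^-1 nm^2
lambda0 = 142.0     # nm
wavelength = 659.2  # nm

V = A / (wavelength**2 - lambda0**2)
print(f"V = {V:.0f} deg T^-1 m^-1")   # ~191, close to the quoted 192 +/- 1 within the fit uncertainties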
Volume 63, № 4, 2011 Conditions of smoothness for the distribution density of a solution of a multidimensional linear stochastic differential equation with Lévy noise Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 435-447 A sufficient condition is obtained for smoothness of the density of distribution for a multidimensional Lévy-driven Ornstein-Uhlenbeck process, i.e., a solution to a linear stochastic differential equation with Lévy noise. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 448-458 An asymptotic formula is constructed for a mean value of the function $\overline{S}_k(n)$ which is dual to the Smarandache function $S_k(n)$. $O$- and $\Omega$-estimates for the second moment of the remainder term are obtained. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 459-465 For quaternionic-differentiable functions of a spatial variable, we prove a theorem on an integral over a closed surface. This theorem is an analog of the Cauchy theorem from complex analysis. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 466-471 We establish some criteria of convexity of compact sets in the Euclidean space. Analogs of these results are extended to complex and hypercomplex cases. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 472-480 Sufficient conditions of the existence of a nonnegative solution are obtained for an evolution inclusion of subdifferential type with multivalued non-Lipschitz perturbation. Under the additional condition of dissipativity, the existence of the global attractor in the class of nonnegative functions is proved. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 481-488 We consider $Q$-homeomorphisms with respect to the $p$-modulus. An estimate for the measure of the image of a ball under such mappings is obtained, and the asymptotic behavior at zero is investigated. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 489-501 In the present paper, we introduce the sequence space $l^{\lambda}_p$ of non-absolute type, which is a $p$-normed space and a $BK$-space in the cases $0 < p < 1$ and $1 \leq p < \infty$, respectively. Further, we derive some imbedding relations and construct the basis for the space $l^{\lambda}_p$, where $1 \leq p < \infty$. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 502-512 Let $R$ be a commutative ring with identity, $M$ an $R$-module and $K_1,..., K_n$ submodules of $M$. In this article, we construct an algebraic object, called the product of $K_1,..., K_n$. We equip this structure with appropriate operations to get an $R(M)$-module. It is shown that the $R(M)$-module $M^n = M \cdots M$ and the $R$-module $M$ inherit some of the most important properties of each other. For example, we show that $M$ is a projective (flat) $R$-module if and only if $M^n$ is a projective (flat) $R(M)$-module. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 513-522 We consider resonance elliptic variational inequalities with second-order differential operators and discontinuous nonlinearity of linear growth. The theorem on the existence of a strong solution is obtained. The initial problem is reduced to the problem of the existence of a fixed point of a compact multivalued mapping and then, with the use of the Leray-Schauder method, the existence of the fixed point is established. On the reconstruction of the variation of the metric tensor of a surface on the basis of a given variation of Christoffel symbols of the second kind under infinitesimal deformations of surfaces in the Euclidean space $E_3$ Ukr. Mat. Zh. - 2011. - 63, № 4. - pp.
523-530 We investigate the problem of reconstruction of the variation of the metric tensor of a surface on the basis of a given variation of the second-kind Christoffel symbols for infinitesimal deformations of surfaces in the Euclidean space $E_3$. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 531-548 We solve the Landau-Kolmogorov problem for the class of functions absolutely monotone on a finite interval. For this class of functions, new exact additive inequalities of Kolmogorov type are obtained. Best $m$-term approximation of the classes $B ^{r}_{\infty, \theta}$ of functions of many variables by polynomials in the Haar system Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 549-555 We obtain the exact-order estimate for the best $m$-term approximation of the classes $B ^{r}_{\infty, \theta}$ of periodic functions of many variables by polynomials with respect to the Haar system in the metric of the space $L_q,\quad 1 < q < \infty$. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 556-565 We prove some uniqueness theorems for algebraically nondegenerate holomorphic curves sharing hypersurfaces ignoring multiplicity. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 566-571 We obtain necessary and sufficient conditions for the Fredholm property and the formula for the calculation of the index of a planar problem with shift and conjugation for a pair of functions. Ukr. Mat. Zh. - 2011. - 63, № 4. - pp. 572-577 In this paper we study transport processes in $\mathbb{R}^n,\quad n \geq 1$, having non-exponentially distributed sojourn times or non-Markovian step durations. We use the idea that the probabilistic properties of a random vector are completely determined by those of its projection on a fixed line, and using this idea we avoid many of the difficulties appearing in the analysis of these problems in higher dimensions. As a particular case, we find the probability density function in three dimensions for 2-Erlang distributed sojourn times.
As a corollary to my other Question "French section numbering using bis, ter, etc", I am looking for a way to number equations by appending "bis," "ter," and other latin suffixes after the equation number. The figure that follows illustrates the desired output. Note the italic bis and ter in the equation numbers. I am able to accomplish this with the following MWE:

\documentclass[letterpage,12pt]{book}
\usepackage{geometry}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[hyphenation,parindent,lastparline]{impnattypo}
\usepackage[all]{nowidow}
\raggedbottom
\usepackage{verbatim}
\usepackage{amsmath,amsthm,amssymb}
\usepackage[frenchb]{babel}
\begin{document}
Equation no. 1 to follow.
\begin{equation}
(\lambda + \mu) \frac{d\theta}{dx} + \mu\Delta_2u
\end{equation}
Equation no. 1 \textit{bis} to follow
\begin{equation}\tag{1 \textit{bis}}
\theta = \frac{du}{dx} + \frac{dv}{dy} + \frac{d\eta}{dz}
\end{equation}
Equation no. 2 to follow.
\begin{equation}
y = mx + b \\[0.5em]
\end{equation}
Equation no. 2 \textit{bis} to follow
\begin{equation}\tag{2 \textit{bis}}
a^2 + b^2 = c^2
\end{equation}
Equation no. 2 \textit{ter} to follow
\begin{equation}\tag{2 \textit{ter}}
E = mc^2
\end{equation}
Equation no. 3 to follow
\begin{equation}
e^{\pi i}=-1
\end{equation}
Equation no. 4 to follow
\begin{equation}
\cos^2{x} + \sin^2{x} = 1
\end{equation}
\end{document}

However, this syntax lacks the automatic equation numbering that I would like to preserve. The MWE that follows is my non-working example that I am trying to edit to accomplish the desired output.

\documentclass[letterpage,12pt]{book}
\usepackage{geometry}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[hyphenation,parindent,lastparline]{impnattypo}
\usepackage[all]{nowidow}
\raggedbottom
\usepackage{verbatim}
\usepackage{amsmath,amsthm,amssymb}
\usepackage[frenchb]{babel}
\begin{document}
Equation no. 1 to follow.
\begin{equation}
(\lambda + \mu) \frac{d\theta}{dx} + \mu\Delta_2u
\end{equation}
Equation no. 1 \textit{bis} to follow
\begin{equation}
\theta = \frac{du}{dx} + \frac{dv}{dy} + \frac{d\eta}{dz}
\end{equation}
Equation no. 2 to follow.
\begin{equation}
y = mx + b \\[0.5em]
\end{equation}
Equation no. 2 \textit{bis} to follow
\begin{equation}
a^2 + b^2 = c^2
\end{equation}
Equation no. 2 \textit{ter} to follow
\begin{equation}
E = mc^2
\end{equation}
Equation no. 3 to follow
\begin{equation}
e^{\pi i}=-1
\end{equation}
Equation no. 4 to follow
\begin{equation}
\cos^2{x} + \sin^2{x} = 1
\end{equation}
\end{document}

This syntax numbers the second equation as (2) and in ascending order afterward, as one would expect. I will want to be able to cross-reference these equation numbers as well. How can I go about accomplishing my desired output?
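One possible direction (my own hedged sketch, not an accepted answer; the environment name equationbis and the label names are mine): with amsmath you can let the base equation keep its automatic number and build the bis tag from a \ref to its label, so cross-references also resolve to, e.g., "1 bis".

% Sketch: a "bis" variant that reuses the number of a labelled base equation.
\newenvironment{equationbis}[1]{%
  \begin{equation}\tag{\ref{#1} \textit{bis}}%
}{%
  \end{equation}%
}

% Usage:
\begin{equation}\label{eq:base}
  (\lambda + \mu) \frac{d\theta}{dx} + \mu\Delta_2u
\end{equation}
\begin{equationbis}{eq:base}\label{eq:basebis}
  \theta = \frac{du}{dx} + \frac{dv}{dy} + \frac{d\eta}{dz}
\end{equationbis}
% \eqref{eq:basebis} then prints (1 bis), while the next plain equation keeps
% its automatic number (2). A "ter" variant can be defined the same way.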
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore) In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... 
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have Well, $A$ has these two dictinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector changed? Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2 Generally, speaking, given. $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$ Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$ for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$ I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ... The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagnolization theorem. It is obvious. (but seriously, the best tactic is over powered...) Extensions is such a powerful idea. I wonder if there exists algebraic structure such that any extensions of it will produce a contradiction. O wait, there a maximal algebraic structures such that given some ordering, it is the largest possible, e.g. surreals are the largest field possible It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? 
"Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system @Rithaniel That is really the crux on those rambles about infinity I made in this chat some weeks ago. I wonder to show that is false by finding a finite sentence and procedure that can produce infinity but so far failed Put it in another way, an equivalent formulation of that (possibly open) problem is: > Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. If fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers , then all $\lambda_k$ are algebraic Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that0<|x−pq|<1qn.{\displaystyle 0<\left|x-{\frac {p}... Do these still exist if the axiom of infinity is blown up? Hmmm... Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test therefore by induction, there exists some number $L$ that is the limit of the above partial sums. 
The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'... and neither Rolle nor the mean value theorem needs the axiom of choice Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment typo: neither Rolle nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
Volume 63, № 7, 2011 Ukr. Mat. Zh. - 2011. - 63, № 7. - pp. 867-879 We solve the extremal problem of finding the maximum of the functional. Ukr. Mat. Zh. - 2011. - 63, № 7. - pp. 880-879 The structure of nodal algebras over a complete discrete valuation ring with algebraically closed residue field is described. Ukr. Mat. Zh. - 2011. - 63, № 7. - pp. 889-903 For the problem of finding a relative Chebyshev point of a system of continuously varying (in the sense of the Hausdorff metric) bounded closed sets of a normed space linear over the field of complex numbers, we establish some existence and uniqueness theorems, necessary and sufficient conditions, and criteria for a relative Chebyshev point and describe properties of the extremal functional and the extremal operator. Ukr. Mat. Zh. - 2011. - 63, № 7. - pp. 904-923 We consider differential equations in a Banach space subjected to pulse influence at fixed times. It is assumed that a partial order is introduced in the Banach space with the use of a certain normal cone and that the differential equations are monotone with respect to initial data. We propose a new approach to the construction of comparison systems in a finite-dimensional space that does not involve auxiliary Lyapunov type functions. On the basis of this approach, we establish sufficient conditions for the stability of this class of differential equations in terms of two measures, choosing a certain Birkhoff measure as the measure of initial displacements, and the norm in the given Banach space as the measure of current displacements. We give some examples of investigation of impulsive systems of differential equations in critical cases and linear impulsive systems of partial differential equations. Existence criteria and asymptotics for some classes of solutions of essentially nonlinear second-order differential equations Ukr. Mat. Zh. - 2011. - 63, № 7. - pp. 924-938 We establish existence theorems and asymptotic representations for some classes of solutions of second-order differential equations whose right-hand sides contain nonlinearities of a more general form than nonlinearities of the Emden-Fowler type. Approximation of functions from the classes $C^{\psi}_{\beta, \infty}$ by biharmonic Poisson integrals Ukr. Mat. Zh. - 2011. - 63, № 7. - pp. 939-959 Asymptotic equalities are obtained for upper bounds of deviations of biharmonic Poisson integrals on the classes of $(\psi, \beta)$-differentiable periodic functions in the uniform metric. Ukr. Mat. Zh. - 2011. - 63, № 7. - pp. 960-968 We obtain necessary conditions for the convergence of multiple Fourier series of integrable functions in the mean. Sharp upper bounds of norms of functions and their derivatives on classes of functions with given comparison function Ukr. Mat. Zh. - 2011. - 63, № 7. - pp. 969-984 For arbitrary $[\alpha, \beta] \subset \textbf{R}$ and $p > 0$, we solve the extremal problem $$\int_{\alpha}^{\beta}|x^{(k)}(t)|^q dt \rightarrow \sup, \quad q \geq p, \quad k = 0, \quad \text{or} \quad q \geq 1, \quad k \geq 1,$$ on the set of functions $S^k_{\varphi}$ such that $\varphi^{(i)}$ is the comparison function for $x^{(i)},\; i = 0, 1, \dots, k$, and (in the case $k = 0$) $L(x)_p \leq L(\varphi)_p$, where $$L(x)_p := \sup \left\{\left(\int^b_a|x(t)|^p dt \right)^{1/p}\; :\; a, b \in \textbf{R},\; |x(t)| > 0,\; t \in (a, b) \right\}$$ In particular, we solve this extremal problem for Sobolev classes and for bounded sets of the spaces of trigonometric polynomials and splines. Ukr. Mat. 
Zh. - 2011. - 63, № 7. - pp. 985-998 We introduce the notion of Volterra quadratic stochastic operators of a bisexual population. The description of the fixed points of Volterra quadratic stochastic operators of a bisexual population is reduced to the description of the fixed points of Volterra-type operators. Several Lyapunov functions are constructed for the Volterra quadratic stochastic operators of a bisexual population. By using these functions, we obtain an upper bound for the ω-limit set of trajectories. It is shown that the set of all Volterra quadratic stochastic operators of a bisexual population is a convex compact set, and the extreme points of this set are found. Volterra quadratic stochastic operators of a bisexual population that have a 2-periodic orbit (trajectory) are constructed. Ukr. Mat. Zh. - 2011. - 63, № 7. - pp. 999-1008 We investigate the Dirichlet weighted eigenvalue problem for a fourth-order elliptic operator with variable coefficients in a bounded domain in $R^n$. We establish a sharp inequality for its eigenvalues. It yields an estimate for the upper bound of the $(k + 1)$-th eigenvalue in terms of the first $k$ eigenvalues. Moreover, we also obtain estimates for some special cases of this problem. In particular, our results generalize the Wang-Xia inequality (J. Funct. Anal. - 2007. - 245) for the clamped plate problem to a fourth-order elliptic operator with variable coefficients.
Answer $$x \approx 0.78,\ 3.22$$ Work Step by Step Using the quadratic formula with $a=-2$, $b=8$, $c=-5$, we obtain: $$x=\frac{-b\pm \sqrt{b^2-4ac}}{2a} = \frac{-8\pm \sqrt{8^2-4(-2)(-5)}}{2(-2)} = \frac{-8\pm\sqrt{24}}{-4} = 2 \mp \frac{\sqrt{6}}{2}$$ $$x \approx 0.78,\ 3.22$$
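A one-line numerical check (my own addition; the coefficients $a=-2$, $b=8$, $c=-5$ are read off the formula above):

import numpy as np

# Roots of -2x^2 + 8x - 5 = 0.
print(np.roots([-2, 8, -5]))   # approximately 3.22 and 0.78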
Consider the following alternative definition of the derivative of a function $f:\mathbb R\to\mathbb R$ at a limit point $x$ of the domain of $f$: $$f'(x)=\lim_{x_1,x_2\to x}\frac{f(x_2)-f(x_1)}{x_2-x_1},$$ where $\lim_{x_1,x_2\to x}\frac{f(x_2)-f(x_1)}{x_2-x_1}$ is the $a\in\mathbb R$ such that for every $\epsilon>0$ there is a $\delta>0$ such that $|\frac{f(x_2)-f(x_1)}{x_2-x_1}-a|<\epsilon$ whenever $x_1$ and $x_2$ are in the domain of $f$, $x_1\neq x_2$, and $\max\{|x-x_1|,|x-x_2|\}<\delta$. What would happen to calculus if we replaced the usual definition with this one? Could the theory be developed in more or less the same way? Would all the major theorems still hold? Let me be clear that I am not asking whether this definition is strictly equivalent to the normal one. (In fact, I'm pretty sure it's not: I think that when under the normal definition $f'(x)$ is defined but $f'$ is discontinuous at $x$, $f'(x)$ is not defined under the alternative definition.) I'm asking something a little more vague: could we do pretty much the same thing with this definition?
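To see concretely how the two definitions can come apart (my own illustrative sketch, not part of the question), take the standard example $f(x)=x^2\sin(1/x)$ with $f(0)=0$: the usual derivative at $0$ is $0$, but two-point quotients taken at suitably chosen pairs tending to $0$ stay near $2/\pi \approx 0.64$, so the limit in the alternative definition fails to exist there.

import math

def f(x):
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

print("ordinary quotients (f(h) - f(0)) / h:")
for n in [10, 100, 1000, 10000]:
    h = 1.0 / n
    print(f"  h = 1/{n}: {f(h) / h:+.6f}")          # tends to 0 = f'(0)

print("two-point quotients at adjacent extrema of sin(1/x):")
for n in [10, 100, 1000, 10000]:
    x1 = 1.0 / (math.pi / 2 + 2 * math.pi * n)       # sin(1/x1) = +1
    x2 = 1.0 / (3 * math.pi / 2 + 2 * math.pi * n)   # sin(1/x2) = -1
    q = (f(x2) - f(x1)) / (x2 - x1)
    print(f"  n = {n}: {q:+.6f}")                    # stays near 2/pi ~ +0.6366, not 0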
This is a really nice question (+1)! It's a very beautiful result, despite the somewhat lengthy proof. So it really intrigues me, and I spent some time and found that this paper answers it: P. Rosenthal, The remarkable theorem of Levy and Steinitz, Amer. Math. Monthly 94 (1987) 342-351. According to the paper, the proof was given in the early 20th century. The purpose of that paper was to "make this beautiful result more widely known". I put down the major part of the proof from the paper below, and anyone who's interested in details or extra discussion can check the paper directly. The proof is LONG, but each step itself is crystal clear; plus I put additional explanation wherever I felt it was needed. In sum, to answer the question asked here: the set of all sums of rearrangements of a given series of complex numbers is (1) the empty set, or (2) a single point, or (3) a line in the complex plane, or (4) the whole complex plane. This result is the $\mathbb R^2$-case of the Levy-Steinitz Theorem, i.e. those are all a translate of a subspace of $\mathbb R^2$. Formal statement of THE LEVY-STEINITZ THEOREM: The set of all sums of rearrangements of a given series of vectors in a finite-dimensional real Euclidean space $\mathbb R^n$ is either the empty set or a translate of a subspace, i.e. a set of the form $v + M$, where $v$ is a given vector and $M$ is a linear subspace. Here comes the complete proof. Step 1: THE POLYGONAL CONFINEMENT THEOREM Polygonal Confinement Theorem: For each dimension n there is a constant $C_n$ s.t. whenever $\{v_i: i = 1, \dots, m \}$ is a finite family of vectors in $\mathbb R^n$ which sums to $0$ and satisfies $||v_i||<1\, \forall i$, there is a permutation $P$ of $(2,\dots, m)$ with the property that $${\Bigg\| v_1+\sum_{i=2}^j v_{P(i)} \Bigg\| \le C_n}$$ for every j. Moreover, we can take $C_1=1$ and $C_n \le \sqrt{4C_{n-1}^2+1},\, \forall\,n >1.$ Proof. The case n = 1 is easy. If, for example, $v_1 > 0$, we can choose $v_{P(2)}<0$, and keep choosing negative $v$'s until the sum of all the chosen vectors becomes negative. Then choose the next $v$ to be positive, and keep choosing positive $v$'s until the sum of all the chosen vectors becomes positive. Continue in this manner until all the $v$'s are used. Since $||v_i|| \le 1$ for all i, it is clear that each partial sum in this arrangement is within distance $1$ of $0$. Hence, $C_1 = 1$. The general case is proven by induction. Assume that $n > 1$ and that $C_{n-1}$ is known to be finite, and consider a collection $\{v_i\}$ of vectors satisfying the hypotheses. Since $\{v_i\}$ is finite there are a finite number of possible partial sums of the $v$'s that begin with $v_1$; let $L$ be such a partial sum with maximal length among all such partial sums. Then $L = v_1 + u_1 + ... +u_s$, where $\{u_1,..., u_s\} \subset \{v_i\}$. Let $\{w_1,..., w_t\}$ denote the other $v$'s, so that $L + w_1 + ... + w_t= 0$. We use the notation $(u|v)$ to denote the Euclidean inner product of $u$ and $v$. We begin with a proof that the $\{u_i\}$ point in the same general direction as $L$, while the $\{w_i\}$ point in the opposite direction. Claim (a): $(u_i|L) \ge 0,\, \forall i$. Suppose that $(u_i|L) < 0$ for some $i$. Then$$\Bigg( (L-u_i)\Bigg| \frac{L}{||L||} \Bigg)=||L||-\frac{1}{||L||}(u_i|L)>||L||$$So $||L-u_i||>||L||$, which contradicts the assumption that $L$ is a longest such partial sum.
Claim (b): $(v_1|L) \ge 0$. For if $(v_1|L) < 0$, then$$\bigg(\frac{-L}{||L||} \bigg | (v_1+w_1+...+w_t) \bigg)=\bigg(\frac{-L}{||L||} \bigg | (v_1 - L) \bigg )= ||L||-\frac{1}{||L||}(L|v_1)>||L||$$so $v_1+w_1+...+w_t$ would be longer than $L$, a contradiction. Claim (c): $(w_i|L) \le 0$ for all i. For if there was an $i$ with $(w_i|L) > 0$, then$$\bigg((L+w_i)\bigg|\frac{L}{||L||}\bigg)=||L||+\frac{(w_i|L)}{||L||}>||L||$$Therefore, $||(L+w_i)||>||L||$. But $||L+w_i||$ is the length of a partial sum of the required kind. This contradicts $||L||$ being the longest. Now, we use the inductive hypothesis in the $(n-1)$-dimensional space$$L^{\perp}=\{v \in \mathbb R^n:(v|L)=0\}$$ We let $v'$ denote the component of a vector $v$ in $L^{\perp}$, i.e.$$v'=v-\frac{(v|L)}{(L|L)}L$$Then $L=v_1+u_1+...+u_s$ implies $v_1'+u_1'+...+u_s'=0$. For a similar reason, $w_1'+...+w_t'=0$. By the induction hypothesis, there exists a permutation $Q$ of $(1,2,...,s)$ such that $$\bigg \| v_1' + \sum_{i=1}^{j}u_{Q(i)}'\bigg \| \le C_{n-1},\, j = 1, 2, ..., s\,(*)$$and there exists a permutation $R$ of $(2,...,t)$ such that$$\bigg \| w_1'+\sum_{i=2}^{j}w_{R(i)}'\bigg \| \le C_{n-1},\, j=2,3,...,t\, (**)$$ Define $R(1)=1$. Now the idea is to keep the above orders within the $u$'s and $w$'s (which will keep the components in $L^{\perp}$ of partial sums from being too large) and alternately "feed in" $u$'s and $w$'s to keep the components along $L$ of length at most 1 (as in the proof of the case n = 1). More precisely, since $(v_1|L) \ge 0$ and $(w_i|L) \le 0$, we can choose a smallest $r$, say $r_1$, such that$$(v_1|L) + \sum_{i=1}^{r_1}(w_{R(i)}|L) \le 0$$Then choose a smallest $s_1$ such that$$(v_1|L) + \sum_{i=1}^{r_1}(w_{R(i)}|L) + \sum_{i=1}^{s_1}(u_{Q(i)}|L) \ge 0$$Then choose a smallest $r_2$ such that$$(v_1|L) + \sum_{i=1}^{r_1}(w_{R(i)}|L) + \sum_{i=1}^{s_1}(u_{Q(i)}|L) + \sum_{i=r_1+1}^{r_2}(w_{R(i)}|L)\le 0$$And so on. Arrange the vectors $\{v_i\}$ in the order of $$(v_1, w_{R(1)},...,w_{R(r_1)}, u_{Q(1)},...,u_{Q(s_1)},w_{R(r_1+1)},...,w_{R(r_2)},...)$$In this arrangement, clearly the components along the direction of $L$ of each partial sum have norm at most $1$. The choice of the arrangements $Q$ and $R$ by the induction hypothesis ensures that the components orthogonal to $L$ of the partial sums have norms at most $C_{n-1} + C_{n-1}$ (by (*) and (**)). Hence, the norm of each partial sum is at most $\sqrt{(2C_{n-1})^2+1}$. Q.E.D. Step 2: THE REARRANGEMENT THEOREM First, the lemma below is a consequence of the Polygonal Confinement Theorem. Lemma 1. If $\{v_i: i = 1,...,m\}\subset \mathbb R^n$ with $||v_i|| \le \epsilon$ for all $i$ and $||\sum_{i=1}^{m}v_i|| \le \epsilon$, then there is a permutation $P$ of $(1,2,...,m)$ such that $$||v_{P(1)}+v_{P(2)}+...+v_{P(r)}|| \le \epsilon(C_n + 1),\,1 \le r \le m$$ Proof. Define $v_{m+1}=-v_1-...-v_m$ so that $\sum_{i=1}^{m+1}v_i=0$. By the Polygonal Confinement Theorem, there is a permutation $P$ of (2,...,m+1) such that$$\bigg\| \frac{1}{\epsilon}v_1 + \sum_{i=2}^{r}\frac{1}{\epsilon}v_{P(i)} \bigg\|\le C_n$$for all r. Then $||v_1 + \sum_{i=2}^r v_{P(i)}|| \le \epsilon C_n$ for all r. Let $P(1)=1$. Now order the $\{v_i\}$ according to $P$, but omit $v_{m+1}$; since $||v_{m+1}|| \le \epsilon$ this omission changes the norms of the partial sums by at most $\epsilon$. Hence in this rearrangement, all the partial sums have norm at most $\epsilon C_n + \epsilon$. This proves the Lemma. The Rearrangement Theorem. 
In $\mathbb R^n$, if a subsequence of the sequence of partial sums of a series of vectors converges to $S$, and if the sequence of terms of the series converges to $0$, then there is a rearrangement of the series that sums to $S$. Proof. Let $\{v_i\}_{i=1}^{\infty}$ be a sequence of vectors in $\mathbb R^n$. For each m let $S_m=\sum_{i=1}^mv_i$. We assume that $\{S_{m_k}\} \rightarrow S$ for some subsequence $\{S_{m_k}\}$, and we must show how to rearrange the $\{v_i\}$ so that the entire sequence of partial sums converges to $S$. The idea is to use Lemma 1 to obtain rearrangements of each of the families $(v_{m_k+1},...,v_{m_{k+1}-1})$ so that all the partial sums of these families are small. Then $S_m$ is close to $S_{m_k}$ if m is between $m_k$ and $m_{k+1}$. Let $\delta_k=||S_{m_k}-S||$; then $\{\delta_k\} \rightarrow 0$. Now$$\bigg\| \sum_{i=m_k+1}^{m_{k+1}-1}v_i \bigg\|=\bigg\| \sum_{i=1}^{m_{k+1}}v_i-\sum_{i=1}^{m_k}v_i-v_{m_{k+1}} \bigg\| \le \delta_{k+1} + \delta_k + ||v_{m_{k+1}}||$$For each $k$ let $$\epsilon_k=\max\{\delta_{k+1}+\delta_k, \sup\{||v_i||: i \ge m_k\}\}$$Then $\{\epsilon_k\}\rightarrow 0$, and$$\bigg\| \sum_{i=m_k+1}^{m_{k+1}-1}v_i \bigg\| \le 2 \epsilon_k$$ By Lemma 1, for each $k$ there is a permutation $P_k$ of $(m_k+1,...,m_{k+1}-1)$ such that$$\bigg\| \sum_{i=m_k+1}^{r}v_{P_k(i)} \bigg\| \le 2\epsilon_k(C_n+1)$$for $r=m_k+1,...,m_{k+1}-1$. Now arrange the $\{ v_i\}$ as follows. Keep $v_{m_k}$ in position $m_k$ for each $k$. Then order the $v_i$ for $(m_k + 1) \le i \le (m_{k+1}-1)$ according to $P_k$. In this arrangement, if $m_k+1 \le m \le m_{k+1}-1$ then $S_m - S_{m_k}$ is a sum of the form $\sum_{i=m_k+1}^{m}v_{P_k(i)}$ with $m<m_{k+1}$, and hence has norm at most $2\epsilon_k(C_n+1)$. Since $\{S_{m_k}\}\rightarrow S$ and $\{\epsilon_k\} \rightarrow 0$, it follows that $\{S_m\}\rightarrow S$. Q.E.D. Step 3 (Final Step): THE LEVY-STEINITZ THEOREM We need another consequence of the Polygonal Confinement Theorem, as below. Lemma 2: If $\{v_i\}_{i=1}^m \subset \mathbb R^n,\, w=\sum_{i=1}^mv_i,\,0 < t < 1$, and $||v_i|| \le \epsilon$ for all $i$, then either $||v_1-tw|| \le \epsilon\sqrt{C_{n-1}^2+1}$ or there is a permutation $P$ of $(2,3,...,m)$ and an $r$ between $2$ and $m$ such that $||v_1+\sum_{i=2}^{r} v_{P(i)}-tw|| \le \epsilon\sqrt{C_{n-1}^2+1}$. (Actually the two scenarios could be unified as permutations of $\{v_{P(i)}\}_{i=1}^r$.) Proof. Suppose $w \ne 0$ (otherwise the result is trivial). Consider the case $n=1$. By multiplying through by $-1$ if necessary, we can assume that $w>0$ (since $n=1$, $w$ is a real number); let $s$ denote the smallest $i$ such that$$v_1+v_2+...+v_i > tw$$Then since $$v_1+v_2+...+v_{s-1} \le tw$$and $|v_s|\le \epsilon$, it follows that$$|v_1+v_2+...+v_s-tw|\le \epsilon\,(***)$$ Thus in the case $n=1$, the Lemma holds with $C_{n-1}=C_0$ being defined to be $0$. Note also that, in the case $n=1$, no rearranging is necessary to get an appropriate partial sum. Now consider the general case of $\mathbb R^n$ for $n>1$. Since $w=\sum_{i=1}^m v_i$, the projections $\{v_i'\}$ of the $\{v_i\}$ onto $\{w\}^{\perp}$ add up to $0$. 
Since $||v_i||\le \epsilon$ for all $i$, the Polygonal Confinement Theorem yields a permutation $P$ of $(2,...,m)$ such that$$\bigg\| \frac{1}{\epsilon}v_1' + \frac{1}{\epsilon}v_{P(2)}'+...+\frac{1}{\epsilon}v_{P(j)}' \bigg\| \le C_{n-1},\, j=2,3,...,m$$Also, $$\bigg ( v_1 \bigg | \frac{w}{||w||} \bigg )+\bigg ( v_{P(2)} \bigg | \frac{w}{||w||} \bigg )+...+\bigg ( v_{P(m)} \bigg | \frac{w}{||w||} \bigg )=||w||$$and $|\frac{(v_i|w)}{||w||}| \le \epsilon$ for all $i$. Hence, the case $n=1$ (***) yields an $r$ such that$$\bigg | \bigg ( v_1 \bigg | \frac{w}{||w||} \bigg )+\bigg ( v_{P(2)} \bigg | \frac{w}{||w||} \bigg )+...+\bigg ( v_{P(r)} \bigg | \frac{w}{||w||} \bigg ) - t||w|| \bigg | \le \epsilon$$ The bounds on the components along $w$ and in $w^{\perp}$ (notice that $tw$ has no component in $w^{\perp}$) yield a bound on the vector, so$$||v_1+v_{P(2)}+...+v_{P(r)}-tw||^2 \le \epsilon^2C_{n-1}^2 + \epsilon^2$$which is the Lemma. Q.E.D.

Now we can finally prove the main theorem.

The Levy-Steinitz Theorem. The set of all sums of rearrangements of a given series of vectors in $\mathbb R^n$ is either the empty set or a translate of a subspace.

Proof. Let $S$ denote the set of all sums of convergent rearrangements of the series $\sum_{i=1}^{\infty}v_i$. We must show that $S$, adjusted by a vector, is a subspace. Suppose $S$ is not empty; then some arrangement of the series converges, and therefore $||v_i|| \rightarrow 0$. By replacing $v_1$ by $v_1 - v$, where $v$ is any element of $S$, we can assume that $0 \in S$. Next we show that if $0,\,s_1,\,s_2 \in S$, then so is $s_1+s_2$. Let $\{\epsilon_m\}$ be a sequence of positive numbers that converges to $0$. Since an arrangement converges to $s_1$, there exists a finite set $I_1$ of positive integers such that $1\in I_1$ and $||\sum_{i\in I_1}v_i-s_1||<\epsilon_1$. Since an arrangement converges to $0$, there is a finite set $J_1 \supset I_1$ such that $||\sum_{i\in J_1}v_i - 0||<\epsilon_1$, and, since an arrangement converges to $s_2$, a finite set $K_1 \supset J_1$ such that $||\sum_{i \in K_1}v_i - s_2|| < \epsilon_1$. There is also a finite set $I_2$ containing both $K_1$ and $\{2\}$ such that $||\sum_{i\in I_2}v_i-s_1||<\epsilon_2$. And so on. Note that we are only dealing with finite sums here, and a finite sum does not depend on the order of its terms, so the order of the elements within these index sets does not matter. By the above procedure, we inductively construct sets $I_m$, $J_m$, and $K_m$ of positive integers such that$$\{1,...,m-1\} \subset K_{m-1} \subset I_m \subset J_m \subset K_m,$$$$\bigg \| \sum_{i\in I_m}v_i - s_1\bigg \|<\epsilon_m,\,\, \bigg \| \sum_{i\in J_m}v_i - 0\bigg \|<\epsilon_m,\,\,\bigg \| \sum_{i\in K_m}v_i - s_2\bigg \|<\epsilon_m\,(****)$$For each $m$, starting at $m=1$, arrange the indices in $J_m$ so that those in $I_m$ come at the beginning, and then arrange the indices in $K_m$ so that those in $J_m$ come at the beginning. Then arrange the indices of $I_{m+1}$ so that those of $K_m$ come at the beginning. Thus there is a permutation $P$ of the set of positive integers and increasing sequences $\{i_m\},\{j_m\},\{k_m\}$ such that $i_m < j_m < k_m < i_{m+1}$, and$$\bigg \| \sum_{i=1}^{i_m}v_{P(i)} - s_1\bigg \|<\epsilon_m,\,\, \bigg \| \sum_{j=1}^{j_m}v_{P(j)} - 0\bigg \|<\epsilon_m,\,\,\bigg \| \sum_{k=1}^{k_m}v_{P(k)} - s_2\bigg \|<\epsilon_m$$for each $m$.
Note that$$\bigg \| \sum_{i=j_m+1}^{k_m}v_{P(i)} - s_2 \bigg \|=\bigg \| \sum_{i=1}^{k_m}v_{P(i)} - \sum_{j=1}^{j_m}v_{P(j)} - s_2 \bigg \| < \epsilon_m + \epsilon_m$$It follows that$$\bigg \| \sum_{i=1}^{i_m}v_{P(i)} + \sum_{i=j_m + 1}^{k_m}v_{P(i)}-(s_1 + s_2) \bigg \|<3\epsilon_m$$For each $m$, rearrange the vectors in $\{v_{P(i)}: i=i_m+1,...,k_m\}$ by interchanging the vectors $\{v_{P(i)}: i = i_m+1,...,j_m\}$ with the vectors $\{v_{P(i)}: i=j_m+1,...,k_m\}$. In this new arrangement, the above shows that there is a subsequence of the sequence of partial sums that converges to $s_1+s_2$. Since we are assuming $S \ne \emptyset$, $\{v_{P(i)}\}\rightarrow 0$. So the Rearrangement Theorem implies that there is another arrangement that converges to $s_1+s_2$. Therefore, $(s_1+s_2) \in S$.

It remains to be shown that $s \in S$ implies $ts \in S$ for every real number $t$. The additivity of $S$ implies this for $t$ a positive integer, so it suffices to consider the cases $t \in (0,1)$ and $t=-1$. We start with the arrangement $P$ used above to show the additivity of $S$. Fix $t \in (0,1)$. As shown above,$$\bigg \| \sum_{i=j_m+1}^{k_m}v_{P(i)}-s_2 \bigg \|<2\epsilon_m$$for each $m$. Let $\delta_m=\sup\{||v_{P(i)}||: i = j_m+1,...,k_m\},\,w=\sum_{i=j_m+1}^{k_m}v_{P(i)}$; notice that $\delta_m \rightarrow 0$ as $m \rightarrow \infty$, and let$$u_m=\sum_{i=j_m+1}^{k_m}v_{P(i)}-s_2=w-s_2$$By Lemma 2, there is a permutation $Q_m$ of $\{P(j_m+1),...,P(k_m)\}$ and an $r_m$ so that$$\bigg \| \bigg (\sum_{i=j_m+1}^{r_m}v_{Q_m(P(i))} \bigg )-t(s_2+u_m) \bigg \| \le M\delta_m,\, M=\sqrt{C_{n-1}^2+1}$$Then $$\bigg \| \bigg( \sum_{i=j_m+1}^{r_m}v_{Q_m(P(i))} \bigg )-ts_2 \bigg \| \le M\delta_m + 2\epsilon_m$$Now $$ \bigg \| \sum_{i=1}^{j_m}v_{P(i)} + \sum_{i=j_m+1}^{r_m}v_{Q_m(P(i))} -ts_2 \bigg \| \le M\delta_m + 3\epsilon_m$$so in this arrangement, a subsequence of the sequence of partial sums converges to $ts_2$. The Rearrangement Theorem yields $ts_2 \in S$.

Last but not least, we need to show that $-s_2 \in S$. Notice that by (****)$$\bigg\| \bigg( \sum_{i=1}^{j_{m+1}}v_{P(i)} - 0 \bigg) - \bigg ( \sum_{i=1}^{k_m}v_{P(i)} - s_2\bigg ) \bigg\| =\bigg \| \sum_{i=1}^{j_{m+1}}v_{P(i)}-\sum_{i=1}^{k_m}v_{P(i)} - (0-s_2) \bigg \| < \epsilon_{m+1} + \epsilon_m$$So$$\bigg \| \sum_{i=k_m + 1}^{j_{m+1}}v_{P(i)} -(-s_2) \bigg \| < \epsilon_{m+1} + \epsilon_m$$ Then $$\bigg \| \sum_{i=1}^{j_m}v_{P(i)} + \sum_{i=k_m + 1}^{j_{m+1}}v_{P(i)} - (-s_2) \bigg \| < \epsilon_{m+1} + 2\epsilon_m$$thus there is an arrangement with a subsequence of the sequence of partial sums converging to $-s_2$. By the Rearrangement Theorem, $(-s_2) \in S$. Q.E.D.

Now, we are finally DONE. Thank you for reading the long way down and hope you enjoyed it - at least I did :)
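As a small numerical addendum (mine, not part of the original write-up): the one-dimensional mechanism underlying these proofs is Riemann's rearrangement idea - greedily alternate positive and negative terms to steer the partial sums toward any target. A minimal sketch, assuming the alternating harmonic series $\pm 1/n$ and an arbitrary target of $0.5$:

```python
import itertools

# Greedy 1-D rearrangement: feed in positive terms while the running sum is
# below the target, negative terms while it is above.  Because the terms tend
# to 0, the partial sums converge to the chosen target.
def rearrange_to(target, n_terms=100000):
    pos = (1.0 / n for n in itertools.count(1, 2))    # +1, +1/3, +1/5, ...
    neg = (-1.0 / n for n in itertools.count(2, 2))   # -1/2, -1/4, -1/6, ...
    total, sums = 0.0, []
    for _ in range(n_terms):
        total += next(pos) if total <= target else next(neg)
        sums.append(total)
    return sums

print(rearrange_to(0.5)[-1])   # prints a value very close to 0.5
```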
Answer $$x=\frac{\pi }{2}+2\pi n,\:x=\frac{3\pi }{2}+2\pi n,\:x=\frac{\pi }{6}+\pi n$$ Work Step by Step We solve the equation using the properties of trigonometric functions. Note that the answer is a general solution: trigonometric functions are periodic, so they pass through a given value of $y$ many times. Factoring the equation, either $\cos\left(x\right)=0$, which gives $x=\frac{\pi }{2}+2\pi n$ and $x=\frac{3\pi }{2}+2\pi n$, or $\sqrt{3}\tan \left(x\right)-1=0$, i.e. $\tan\left(x\right)=\frac{1}{\sqrt{3}}$, which gives $x=\frac{\pi }{6}+\pi n$: $$\cos \left(x\right)\left(\sqrt{3}\tan \left(x\right)-1\right)=0 \\ x=\frac{\pi }{2}+2\pi n,\:x=\frac{3\pi }{2}+2\pi n,\:x=\frac{\pi }{6}+\pi n$$
In the late 1960's Penrose developed twistor theory, which (amongst other things) led to an exceptional description for solutions to the wave equation on Minkowski space via the so-called Penrose transform; If \begin{equation}u(x,y,z,t) = \frac {1} {2 \pi i} \oint_{\Gamma \subset \mathbb{C} \mathbb{P}^{1}} f(-(x+iy) + \lambda (t-z), (t+z) + \lambda (-x + i y), \lambda ) d \lambda, \,\,\,\,\,\,\,\,\,\, (1)\end{equation} where $\Gamma \subset \mathbb{C} \mathbb{P}^{1}$ is a closed contour and $f$ is holomorphic on $\mathbb{C} \mathbb{P}^{1}$ except at some number of poles, then $u$ satisfies the Minkowski wave (Laplace-Beltrami) equation $\square_{\eta} u = 0$. I am aware that there are a number of works in the literature describing twistor theory on curved manifolds, but I have not seen explicit constructions along the lines of (1) such that the function $u$ satisfies a wave equation of the form $\square_{g} u = 0$ for a (Lorentzian) metric $\boldsymbol{g}$. Is it known how to $\textit{explicitly}$ construct contour integrals similar to $(1)$ for some class of metrics $\boldsymbol{g}$? What about when $\boldsymbol{g}$ is Einstein (e.g. Schwarzschild), in particular? Are there topological obstructions in spacetimes $I \times \Sigma$? What about de-Sitter space?

This post was imported from StackExchange MathOverflow at 2016-12-22 17:28 (UTC), posted by SE-user Arthur Suvorov
Let $(\ell^{\infty})'$ be the $\mathbb{F}$-vector space of linear and continuous (bounded) functionals $\ell^{\infty}\rightarrow \mathbb{F}$, where $\mathbb{F}$ is either $\mathbb{R}$ or $\mathbb{C}$ (but we can assume $\mathbb{F}=\mathbb{R}$, if needed) and $\ell^{\infty}$ has the sup norm $\parallel\cdot\parallel_{\infty}$. Let also $c$ be the subspace of $\ell^{\infty}$ consisting of convergent sequences. Then the limit functional $\lim\colon c\rightarrow \mathbb{F}$ sending a convergent sequence to its limit is a continuous, linear functional with operator norm $1$ ($c$ has the sup norm as well). I am asked to prove or disprove that there exist distinct elements $f,g\in(\ell^{\infty})'$ which extend the limit functional on $c$. I think the claim is true, but, up to now, I have been able to prove only the following fact (for $\mathbb{F}=\mathbb{R}$), using the Hahn-Banach Extension Theorem: for every real number $\lambda$ with $-1\leq \lambda \leq 1$, there exists a linear extension $h_{\lambda}$ of the limit functional to the whole of $\ell^{\infty}$ such that, for all $\alpha\in\mathbb{R}$ and any convergent sequence $x\in c$, if $y$ is the sequence $((-1)^{n})_{n\in\mathbb{N}}$, then $$h_{\lambda}(\alpha y+x)=\alpha\lambda +\lim(x)\leq \limsup(\alpha y+x).$$ In particular, there are uncountably many linear extensions of the limit functional. I cannot prove that at least two of these are continuous, though. Can someone help me solve this problem with a worked solution? (I have looked for Banach limits around, but I have not found an explicit proof of the non-uniqueness of such continuous extensions of the limit functional.) Thanks in advance.
I'm a bit stuck with the tensor analysis for the following problem. It was just introduced to me and I've never seen this before. All I'm looking for is a place to start, because I'm unsure where to even begin with this. The distance squared between two infinitesimally close points in Cartesian coordinates is $ds^2 = dx_1^2 + dx_2^2 + dx_3^2$. So using the chain rule, the distance squared in general coordinates is \begin{align} ds^2 = \sum_{\alpha = 1}^3 \sum_{\beta = 1}^3 g_{\alpha \beta} du_{\alpha} du_{\beta} \end{align} where the metric tensor $g$ is \begin{align} g_{\alpha \beta} = \sum_{\mu = 1}^3 \dfrac{\partial x_{\mu}}{\partial u_\alpha}\dfrac{\partial x_{\mu}}{\partial u_\beta} \end{align} Note the metric tensor is a function of $u_1$, $u_2$, and $u_3$. Write the Lagrangian for these coordinates, calculate the conjugate momenta in terms of the velocities, and from these calculate the Hamiltonian. Your result should be \begin{align} H = \dfrac{1}{2m} \sum_{\alpha = 1}^3 \sum_{\beta = 1}^3 p_{\alpha} g_{\alpha \beta}^{-1} p_{\beta} \end{align} where $g^{-1}$ is the inverse (matrix) of $g$ I know what to do, i.e., find the Lagrangian, take $p_i = \partial \mathcal L / \partial \dot q_i$, and use $H = \sum_i \dot q_i p_i - \mathcal{L}$. I'm just unsure on how to do this with the tensors. I know the first step, where $$ \mathcal L = \dfrac{m}{2}\left(\dfrac{ds}{dt}\right)^2 = \dfrac{m}{2} \sum_{\alpha = 1}^3 \sum_{\beta = 1}^3 g_{\alpha \beta} \dot u_{\alpha} \dot u_{\beta}$$ But that's as far as I can get with my math skills. Can someone point me in the right direction? Preferably to a source that does not use Einstein notation. Thank you!
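A hedged sketch of the next step (my addition, not part of the original question), assuming only that $g_{\alpha\beta}$ is symmetric and invertible:

\begin{align}
p_{\alpha} &= \frac{\partial \mathcal L}{\partial \dot u_{\alpha}} = m \sum_{\beta = 1}^{3} g_{\alpha \beta}\, \dot u_{\beta} \quad\Rightarrow\quad \dot u_{\alpha} = \frac{1}{m} \sum_{\beta = 1}^{3} g^{-1}_{\alpha \beta}\, p_{\beta},\\
H &= \sum_{\alpha = 1}^{3} p_{\alpha} \dot u_{\alpha} - \mathcal L = \frac{1}{m} \sum_{\alpha, \beta} p_{\alpha}\, g^{-1}_{\alpha \beta}\, p_{\beta} - \frac{1}{2m} \sum_{\alpha, \beta} p_{\alpha}\, g^{-1}_{\alpha \beta}\, p_{\beta} = \frac{1}{2m} \sum_{\alpha = 1}^{3} \sum_{\beta = 1}^{3} p_{\alpha}\, g^{-1}_{\alpha \beta}\, p_{\beta}.
\end{align}

The symmetry of $g$ is what collapses the two product-rule terms in the first line into a single factor of two.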
Goniometry

Goniometry is the part of mathematics in which the so-called goniometric functions are studied, of which the most important are: sine, cosine and tangent. These functions can be defined in a purely geometric way as follows (see Figure g044590a): $$\sin\alpha=\frac{PQ}{OP},\quad\cos\alpha=\frac{OQ}{OP},\quad\tan\alpha=\frac{PQ}{OQ}.$$ It can be shown that for arbitrary real $x$ the following series expansions hold: $$\sin x=x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\dots,$$ $$\cos x=1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\dots.$$ As these series are convergent for arbitrary complex numbers too, the said functions can be extended to the whole complex plane and be studied for their own sake without any geometric application. Important parts of goniometry are plane and spherical trigonometry. In plane trigonometry the main problem is to compute three of the six elements of a plane triangle (3 sides and 3 angles) if three of them are known. The object of spherical trigonometry is to study the properties of spherical triangles. Applications of these disciplines can be found in surveying and navigation. See also Inverse trigonometric functions.

How to Cite This Entry: Goniometry. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Goniometry&oldid=43484
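Not part of the encyclopedia entry, but as a quick numerical illustration of the series expansions above, the truncated partial sums can be compared against a library implementation (the 10-term cutoff below is an arbitrary choice):

```python
import math

def sin_series(x, terms=10):
    # Partial sum of x - x^3/3! + x^5/5! - ...
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

def cos_series(x, terms=10):
    # Partial sum of 1 - x^2/2! + x^4/4! - ...
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(terms))

x = 1.2
print(sin_series(x), math.sin(x))  # the partial sums agree with math.sin/math.cos
print(cos_series(x), math.cos(x))  # to many decimal places for moderate x
```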
https://doi.org/10.1351/goldbook.RT07475 Parameter describing the time-dependence of the tumbling of a molecular entity in a medium of @V06627@ \(\eta\) as originally defined by @D01533@, and used by Perrin in the original development of the theories of rotational motion of fluorophores. Note: Related to the rotational correlation time, \(\tau_{c}\), by \(\rho = 3\tau_{c}\). Thus, in the case of a spherically emitting species reorienting itself in a homogeneous fluid, \(\rho = 1/(6\, D_{r})\), with \(D_{r}\) the @R05411@.
Paraphrasing Griffiths: For some particle of mass m constrained to the x-axis subject to some force $F(x,t)=-∂V/∂x$, the program of classical mechanics is to determine the particle's position at any given time: $x(t)$. This is obtained via Newton's second law $F=ma$. $V(x)$ together with an initial condition determines $x(t)$. The program of quantum mechanics is to obtain the particle's wave function $\Psi(x,t)$, gotten from solving the Schrödinger equation: $$i \hbar \frac{∂\Psi}{∂t} = -\frac{\hbar^2}{2m}\frac{∂^2\Psi}{∂x^2} + V\Psi .$$ This is a simple case, but it illustrates the program, and generalizes to multiple particles, 3 dimensions, spin, and magnetism easily. What is the equivalent program of quantum field theory? And also, what is the specific representation of the "state" within that program? For example, in quantum mechanics for 1 particle in 3 dimensions, excluding spin, $\Psi: \mathbb R\times \mathbb R^3 \rightarrow \mathbb C $ subject to normalization constraints. Another property of the previous two programs is that it is immediately clear how the state variables evolve numerically over time (if not calculable). And for such a solution program, is there an algebraic derivation, the way the Galilean group provides such a derivation for the Schrödinger equation in quantum mechanics? I'm aware of second quantization, and that particle number changes, and I've seen various Lagrangians, but only for specific cases, and these are unsatisfying compared to the seemingly generic programs of other branches. An answer dependent on Hamiltonian mechanics, classical field theory, exterior calculus, or abstract algebra is fine. Edit: This is not a duplicate. I've seen the other question, and it's getting at how QFT differs from single-particle QM generally. I'm asking what is the specific solution program that is just generic enough to encompass all of quantum field theory, and incidentally the mathematical structure of the instances of the state variables in it, and also incidentally whether an algebraic derivation of the program exists.
I know you can't have work without any displacement, so I was kind of wondering as to what keeps, for example, a man on a jetpack, off the ground but with no more change in height from the initial height he was on? Is this still a form of energy or something else, because if he burns fuel to keep himself off the ground, doesn't that mean energy is being used? A table can forever keep an apple "levitated" above the ground with its normal force. That requires no energy. No work is done. A force does not spend energy to fight against another force. The force may cost energy to be produced, though. This is a separate issue. The jetpack spends fuel to produce an up-drift force, the human body spends nutrition to extend/contract muscles to produce the "holding" force to hold a milk can, but the table spends nothing to produce its normal force. The jetpack falls down after a while and you feel tired after a while, not because work was done on the objects, but because work was done inside those "machines" (jetpack and body) that produce the forces. The table never gets tired. It never does any work. The issue is clearly not about holding anything. It takes no energy to hold stuff. You are correct that no work is done on the levitating man, if he undergoes no displacement. Work may be done inside the "machine" that produces the force, but that is internal. To hover in the air, conservation of momentum dictates that to keep a 100 kg object hovering we must "throw" 100 kg down towards the earth at a velocity of 9.8 m/s for every second we want to hover. The kinetic energy required to accelerate 100 kg to 9.8 m/s is 4.8 kilojoules. So a propeller that grabs 100 kg of air per second would need 4.8 kilojoules per second or 4.8 kilowatts (a watt is joules per second). We could also propel twice the mass of air at half the speed to hover. Since kinetic energy goes as the square of the velocity, propelling 200 kg down at 4.9 m/s would use 2.4 kilowatts, or half the energy. So bigger is better and there is no theoretical limit to how low your energy consumption can go. Some sort of futurist tractor beam that can push or pull a very large mass of air would use almost no power. With our current technology and materials a very large open blade (i.e. a helicopter), large ducted fan or high-bypass turbofan are your best options as they will move the maximum amount of air with the minimum amount of energy. If you want a more traditional jet pack where all of the reaction mass is kept on board then you're getting into rocketry and don't care about energy efficiency. You just care about the specific impulse (energy density) of the fuel. Also, even with the best rocket fuels your maximum flight times will be measured in seconds. Your question is actually profound in a subtle way. The key to understanding this is that the man has a force being applied to him by gravity that is pulling him down. In order for him to stay aloft at a constant height, there must be a force that acts in the opposite direction and counteracts the force of gravity. In your example, that counteracting force is supplied by the jetpack. So the jetpack must continually produce an upward force equivalent to the weight of the man (and the jetpack). But why is that different than when the man is standing on the ground? The earth's gravity is still acting on you but you don't have to continually burn fuel to stay in place. Consider the same man standing on a large spring.
When he first gets on it, he will move towards the earth and compress the spring until it pushes back enough to stop his motion. The spring's upward force is being supplied by the weight of the man in a reflective manner. Essentially the ground does the same thing. The elasticity of the surface creates a mechanical equilibrium. Newtonian models don't really describe how materials produce elastic force. It is simply assumed. The jetpack must create an upward force equal to that of gravity. For that reason, it expels some mass downward. Below, I calculate the work done by the jetpack assuming all mass expelled from it leaves at velocity $v$, which depends on the details of the jetpack's construction. Suppose the jetpack, of mass $m(t)$ where $t$ is the time, loses mass $\delta m(t)$ over a short time $\delta t$; then its momentum changes by $\delta p = \delta m \cdot v$, where $v(t)$ is the velocity with which matter is pushed down from the jetpack. To counterbalance gravity, you have to have $m(t)g=-\frac{\delta m}{\delta t} v$, where the sign comes from the fact that you lose mass from the jetpack as you try levitating. In the limit of very short time, the equation becomes: $\dfrac{dm(t)}{m(t)}=-\dfrac{g}{v}\,dt$ This equation is solvable, and one can obtain the mass that needs to be lost to keep flying with the jetpack. Suppose you need to find the work done in flying it; that has to be equal to the kinetic energy of the expelled gas over some time. Since we assumed all mass is expelled at $v$ from the jetpack, the work should be $W(t)=\dfrac{(m_0-m(t))v^2}{2}$, with $m_0$ being the initial mass of man+jetpack. As @Steeven explains, no energy is required in principle. However, you will find that 'hovering' does require energy. How much depends on how you hover. The basics are very simple. Gravity exerts a constant force $F$ on the levitating object. To counteract that force $F$, you can either place the object on a table, or impart momentum on an external reaction mass like air (helicopter), or propel some of your own mass (rocket). The force generated by imparting momentum on an external reaction mass is $$ F \propto \dot{m}v$$ with $\dot{m}$ the reaction mass flow and $v$ the reaction mass velocity. To do this, you require a certain amount of power, $$ P \propto \dot{m}v^2$$ From this, it is immediately obvious that you want to have a very large mass flow and a very low reaction mass velocity. This is why helicopters are more efficient than jetpacks (and turbofans more efficient than turbojets). In rocket science, this still holds, but since you need to store all your mass $m$ on board before starting your hover, it is preferable to expend a lot of energy to minimise the mass flow. This is why jetpacks are still more preferable than rocket suits for hovering on Earth.
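To make the scaling in the last answer concrete, here is a small back-of-the-envelope script (my own illustration, not from the original thread; the mass-flow values are arbitrary examples):

```python
# Hovering a 100 kg load: the thrust must equal the weight, F = mdot * v,
# while the power carried away by the jet is P = 0.5 * mdot * v**2.
g = 9.8            # m/s^2
weight = 100 * g   # N

for mdot in (100, 200, 1000):        # kg of air pushed down per second
    v = weight / mdot                # jet velocity needed so that F = weight
    power = 0.5 * mdot * v**2        # W
    print(f"mdot = {mdot:5d} kg/s -> v = {v:5.2f} m/s, P = {power/1000:.2f} kW")

# Larger mass flow at lower velocity needs less power, which is why helicopters
# hover far more efficiently than jetpacks.
```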
Suppose we have a relativistic system with spontaneously broken symmetry. For simplicity, let's choose a broken $U(1)$ symmetry: $$ L = \frac{1}{2}(\partial_{\mu}\varphi )^{2} - V(|\varphi |) , V(|\varphi | = \varphi_{0}) = 0 \qquad (1) $$ Let us parametrize the Goldstone degree of freedom, $\varphi = \varphi_{0}e^{i\theta}$. Then instead of $(1)$ we will get $$ L = \frac{\varphi_{0}^{2}}{2}(\partial_{\mu}\theta )^{2} \qquad (2) $$ In fact, this leaves us with an undetermined VEV of the Goldstone degree of freedom. This VEV corresponds to the coherent state of $\theta$-particles with zero momentum. Suppose also that instead of $(1)$ we have a lagrangian with a small explicit symmetry-breaking term, which after substitution of the ansatz $\varphi = \varphi_{0}e^{i\theta}$ leads to the appearance of an extra term $-\frac{\varphi_{0}^{2}m^{2}}{2}\theta^2$ in $(2)$: $$ L' = \frac{\varphi_{0}^{2}}{2}(\partial_{\mu}\theta )^{2} - \frac{\varphi_{0}^{2}m^{2} \theta^{2}}{2} \qquad (2') $$ The first question: in the case of absence of explicit symmetry breaking, can we fix this VEV by some properties of the underlying theory? Or is it completely undetermined, taking arbitrary values from $0$ to $2 \pi$? The second question: in the case of presence of an explicit symmetry breaking term, can we immediately state that the VEV of the $\theta$ field is zero, due to the presence of the mass term in $(2')$, which makes zero contribution to the classical energy of the $\theta$ field only if $\langle \theta \rangle = 0$? The third extra question. Suppose that we know only the effective theory with the $\theta$ field and its interactions with other fields, and the lagrangian takes the form $$ L{''} = \frac{1}{2}(\partial_{\mu}\theta )^{2} - \sum_{i}\partial_{\mu}\theta J^{\mu}_{i}c_{i}, \qquad (3) $$ where $J_{\mu}^{i}$ denotes some vector or pseudovector currents, depending on what $\theta$ is. May we strictly state that, due to the scale invariant form of the lagrangian $(3)$, the $\theta$ field arises in the corresponding effective theory as nothing but a goldstone degree of freedom?
№ 8 All Issues Volume 64, № 2, 2012 Value-sharing problem for p-adic meromorphic functions and their difference operators and difference polynomials Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 147-164 We discuss the value-sharing problem, versions of the Hayman conjecture, and the uniqueness problem for p-adic meromorphic functions and their difference operators and difference polynomials. Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 165-175 Let $R$ be a prime ring of characteristic not 2 and let $I$ be a nonzero right ideal of $R$. Let $U$ be the right Utumi quotient ring of $R$ and let $C$ be the center of $U$. If $G$ is a generalized derivation of $R$ such that $[[G(x), x], G(x)] = 0$ for all $x \in I$, then $R$ is commutative or there exist $a, b \in U$ such that $G(x) = ax + xb$ for all $x \in R$ and one of the following assertions is true: $$(1)\quad (a - \lambda)I = (0) = (b + \lambda)I \;\;\text{for some}\; \lambda \in C,$$ $$(2)\quad (a - \lambda)I = (0) \;\;\text{for some}\; \lambda \in C \;\;\text{and}\; b \in C.$$ Classification of finite commutative semigroups for which the inverse monoid of local automorphisms is permutable Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 176-184 We give a classification of finite commutative semigroups for which the inverse monoid of local automorphisms is permutable. Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 185-199 We describe vector bundles over a class of noncommutative curves, namely, over noncommutative nodal curves of string type and of almost string type. We also prove that, in other cases, the classification of vector bundles over a noncommutative curve is a wild problem. Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 199-209 We establish some new Agarwal – Pang-type inequalities involving second-order partial derivatives. Our results in special cases yield some of interrelated results and provide new estimates for inequalities of this type. Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 210-217 Let $G$ be a finite group. The prime graph of $G$ is the graph $\Gamma(G)$ whose vertex set is the set $\Pi(G)$ of all prime divisors of the order $|G|$ and two distinct vertices $p$ and $q$ of which are adjacent by an edge if $G$ has an element of order $pq$. We prove that if $S$ denotes one of the simple groups $L_5(4)$ and $U_4(4)$ and if $G$ is a finite group with $\Gamma(G) = \Gamma(S)$, then $G$ has a $G$ normal subgroup $N$ such that $\Pi(N) \subseteq \{2, 3, 5\}$ and $\cfrac GN \cong S$. On equalities involving integrals of the logarithm of the Riemann ζ-function and equivalent to the Riemann hypothesis Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 218-228 Using the generalized Littlewood theorem about a contour integral involving the logarithm of an analytical function, we show how an infinite number of integral equalities involving integrals of the logarithm of the Riemann ζ-function and equivalent to the Riemann hypothesis can be established and present some of them as an example. It is shown that all earlier known equalities of this type, viz., the Wang equality, Volchkov equality, Balazard-Saias-Yor equality, and an equality established by one of the authors, are certain particular cases of our general approach. Investigation of solutions of boundary-value problems with essentially infinite-dimensional elliptic operator Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 229-236 We consider Dirichlet problems for the Poisson equation and linear and nonlinear equations with essentially infinite-dimensional elliptic operator (of the Laplace -Levy type). 
The continuous dependence of solutions on boundary values and sufficient conditions for increasing the smoothness of solutions are investigated. Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 237-244 We propose an algorithm for the solution of the boundary-value problem $U(0,x) = u_0,\;\; U(t, 0) = u_1$ and the external boundary-value problem $U(0, x) = v_0, \;\;U(t, x) |_{\Gamma} = v_1, \;\; \lim_{||x||_H \rightarrow \infty} U(t, x) = v_2$ for the nonlinear hyperbolic equation $$\frac{\partial}{\partial t}\left[k(U(t,x))\frac{\partial U(t,x)}{\partial t}\right] = \Delta_L U(t,x)$$ with divergent part and infinite-dimensional Levy Laplacian $\Delta_L$. Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 245-252 We give a necessary and sufficient condition for the inclusion of $\Lambda BV^{(p)}$ in the classes $H^q_{\omega}$. Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 253-267 The problem of reducing polynomial matrices to the canonical form by using semiscalar equivalent transformations is studied. A class of polynomial matrices is singled out, for which the canonical form with respect to semiscalar equivalence is indicated. This form enables one to solve the classification problem for collections of matrices over a field up to similarity. Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 268-274 Necessary and sufficient conditions for the controllability of solutions of linear inhomogeneous integral equations are obtained. Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 275-276 Answering a question of Banakh and Lyaskovska, we prove that for an arbitrary countable infinite amenable group $G$ the ideal of sets having $\mu$-measure zero for every Banach measure $\mu$ on $G$ is an $F_{\sigma \delta}$ subset of $\{0,1\}^G$. Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 277-282 A representation of solutions of a discontinuous integro-differential operator is obtained. The asymptotic behavior of the eigenvalues and eigenfunctions of this operator is described. Ukr. Mat. Zh. - 2012. - 64, № 2. - pp. 283-288 A square matrix is said to be diagonalizable if it is similar to a diagonal matrix. We establish necessary and sufficient conditions for the diagonalizability of matrices over a principal ideal domain.
InfoGAN [1] is a really cool GAN (generative adversarial network) [2] variant. It is not only able to generate images, but can also learn meaningful latent variables without any labels on the data. One example given in the paper is that when InfoGAN is trained on the MNIST handwritten digit dataset, variables representing the type of digit (0-9), the angle of the digit, and the thickness of the stroke are all inferred automatically. Here are some sample output images generated from my unofficial Torch implementation of InfoGAN. A 10-category salient variable was varied when generating these digits, shown horizontally. Whilst not perfect, notice that a lot of the digits are separated in a meaningful way without using any labels. The main InfoGAN proposal is to set aside part of the generator’s input $c$ as “salient”, in addition to the typical GAN noise input $z$. These salient latent variables can be sampled from any distribution you like - categorical, uniform, Gaussian, etc. The discriminator attempts to recover the salient variables by looking at the generated image, which is used to form a new “information regularization” term within the usual GAN objective function. So, instead of just outputting a prediction of whether the image is real or fake $\hat{y}$, the discriminator also has an output $\hat{c}$ for reconstructing $c$. The authors of the paper take an information theoretic approach to explaining InfoGAN. More specifically, they state that InfoGAN works by maximizing the mutual information between the salient variables \(c\) and the generated images \(x\). Let $C$ be a random variable representing the salient latent information and $X=G(C,Z)$ be a random variable representing the image produced by the generator from $C$ and noise $Z$. Using the definitions for mutual information, cross entropy, entropy, and conditional entropy, as well as the fact that KL divergence is non-negative, we get the following lower bound on the mutual information: \begin{align*} I(C;X) &= H(C)-H(C|X)\\ &= H(C)-\mathbb{E}_{X}[H(C|X)]\\ &= H(C)-\mathbb{E}_{X}[H(p(\cdot|X),q(\cdot|X))-D_{KL}(p(\cdot|X)||q(\cdot|X))]\\ &= H(C)+\mathbb{E}_{X}[D_{KL}(p(\cdot|X)||q(\cdot|X))]-\mathbb{E}_{X}[H(p(\cdot|X),q(\cdot|X))]\\ &= H(C)+\mathbb{E}_{X}[D_{KL}(p(\cdot|X)||q(\cdot|X))]+\mathbb{E}_{X}[\mathbb{E}_{C|X}[\log q(C|X)]]\\ &= H(C)+\mathbb{E}_{X}[D_{KL}(p(\cdot|X)||q(\cdot|X))]+\mathbb{E}_{X,C}[\log q(C|X)]\\ & \ge H(C)+\mathbb{E}_{X,C}[\log q(C|X)]\\ \end{align*} This tells us that we can maximize the mutual information $I(C;X)$ by maximizing $\mathbb{E}_{X,C}[\log q(C|X)]$. That is, we want to minimize the negative log likelihood (NLL) of $q(C|X)$, our discriminator approximation of the true posterior distribution $p(C|X)$. In order to do this we need to be able to jointly sample from $C$ (easy since we defined the distribution of salient variables ourselves) and $X$ (also easy - generate noise to concatenate with the salient variables and do a forward pass through the generator). Just to reiterate, this is bog-standard GAN stuff - we just sample inputs to the generator and generate an image, which is exactly what you need to do with the usual GAN objective. The only complicated stuff left is calculating the NLL for the salient variables. One way to do this is by going deep into defining random distributions and how to calculate the NLL for each. This is done in OpenAI’s implementation of InfoGAN (see distribution.py).
In a framework like Torch this means defining a new criterion, which is certainly possible, and I have done this in my unofficial Torch implementation of InfoGAN (see the pdist folder). I believe that we can simply use tried and true criteria like nn.MSECriterion and nn.ClassNLLCriterion instead of diving deep into custom NLL mumbo-jumbo. In fact, I have tried this and the results appear just as good. Why does this work? Well, let’s consider some distributions we could select for salient variables. We define $c$ to be the input salient variables, and $\hat{c}$ to be the output predictions from our $q(C|X)$ approximation. For categorical salient variables, we have $n$ discrete variables, of which one is set to 1 and the rest are 0. This is so-called “one-hot encoding”, and is used for usual classification problems. This one is easy, since minimizing the NLL is what we do for classification anyway - it even says so on the tin (nn.ClassNLLCriterion). Just use this, it’s the same. Gaussian salient variables are a little trickier. First we will look at the definition for the NLL of a Gaussian: \[ -\log p(c|\hat{c},\sigma^{2})=\frac{1}{2\sigma^{2}}(c-\hat{c})^{2}+\frac{N}{2}\log\sigma^{2}+\frac{N}{2}\log(2\pi) \] Notice anything interesting? The only term involving the $c$s is a squared error with a scaling factor in front. If we use a fixed standard deviation, then minimizing the NLL of a Gaussian is equivalent to minimizing mean squared error. So we can simply use nn.MSECriterion! The official InfoGAN implementation treats uniform distributions as Gaussians for everything but sampling, so we can once again use MSE. InfoGAN is really neat, and not nearly as difficult to implement as an initial read through of the paper might suggest. Essentially, we just need to make three adjustments to a regular GAN:
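As a quick numerical check of the point above that the Gaussian NLL with fixed \(\sigma\) reduces to MSE (my own illustration, not from the original post):

```python
# For fixed sigma, the Gaussian negative log-likelihood differs from the mean
# squared error only by a constant scale and offset, so minimising one
# minimises the other.
import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(size=1000)                       # "true" salient values
c_hat = c + rng.normal(scale=0.3, size=1000)    # noisy reconstructions
sigma, n = 1.0, len(c)

nll = (0.5 / sigma**2) * ((c - c_hat) ** 2).sum() \
      + 0.5 * n * np.log(sigma**2) + 0.5 * n * np.log(2 * np.pi)
mse = ((c - c_hat) ** 2).mean()

# The NLL equals 0.5 * n * mse / sigma^2 plus a term that does not depend on c_hat.
print(nll, 0.5 * n * mse / sigma**2 + 0.5 * n * np.log(sigma**2) + 0.5 * n * np.log(2 * np.pi))
```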
I have a very silly question about an inequality. Let $n_x \geq 1$ be an integer; intuitively there's a unique integer $N_x \geq 1$ such that $$ 32N_x - 31 \leq n_x \leq 32N_x $$ However, I don't know why I'm struggling to prove rigorously that the statement is true. My attempt was to define the function $$ N_x = N_x(n_x) = \left\lfloor \frac{n_x+31}{32} \right\rfloor $$ By the monotonicity of the floor function I have $$ N_x \leq \frac{n_x+31}{32} \Rightarrow 32N_x-31 \leq n_x $$ which gives me the lower bound on $n_x$. For the upper bound, $$ N_x > \frac{n_x + 31}{32} - 1 = \frac{n_x - 1}{32} \geq 0 \Rightarrow N_x \geq \frac{n_x}{32} \Rightarrow n_x \leq 32N_x $$ Assuming everything is correct (I'm still puzzled by the step $N_x \geq \frac{n_x}{32}$), I don't actually like the proof I just gave, since it involves guessing the function $N_x(n_x)$, while I was specifically interested only in the existence of a unique solution. I'm also not sure that I've proved that the solution I gave is the unique one: I proved that there's a solution, but not that such a solution is the unique one. Is there a better way to prove the uniqueness of the solution without passing through the function I defined?
OpenCV #006 Sobel operator and Image gradient Digital Image Processing using OpenCV (Python & C++) Highlights: In this post, we will learn what the Sobel operator and an image gradient are. We will show how to calculate the horizontal and vertical edges as well as edges in general. What is the most important element in the image? Edges! See below. Tutorial Overview: 1. What is the Gradient? Let’s talk about differential operators. When differential operators are applied to an image, they return some derivative. These operators are also known as masks or kernels. They are used to compute the image gradient function. And after this has been completed, we need to apply a threshold to assist in selecting the edge pixels. And what does multivariate mean? Multivariate means that a function is actually a function of more than one variable. For example, an image is a function of two variables, \(x \) and \(y \). When we differentiate a function of more than one variable with respect to just one of them, we call the result a partial derivative: the derivative in the \(x \) or in the \(y \) direction. And the gradient is the vector that’s made up of those derivatives. $$ \bigtriangledown f= \left [ \frac{\partial f}{\partial x},\frac{\partial f}{\partial y} \right ] $$ So what operator are we going to use in showing the gradient? It’s the vector above. Recall that an image is a function of two variables, so its gradient is a vector with two components: \(\partial f \) with respect to \(x \) (the gradient of the image in the x direction) and \(\partial f \) with respect to \(y \). Based on the vector above, suppose the image only changes in the x-direction. Then the gradient would be whatever the change is in the x-direction and \(0 \) in the y-direction. The same reasoning can be applied to changes in the y-direction. Python C++ #include <iostream>#include <opencv2/opencv.hpp> using namespace std;using namespace cv; int main(){ Mat image = cv::imread("dragon.jpg", IMREAD_GRAYSCALE); cv::imshow("Dragon", image); cv::waitKey(); // we have already explained linear filters for horizontal and vertical edge detection // reference -->> CNN #002, CNN #003 posts. cv::Mat image_X; // this is how we can create a horizontal edge detector // Sobel(src_gray, dst, depth, x_order, y_order) // src_gray: The input image. // dst: The output image. // depth: The depth of the output image. // x_order: The order of the derivative in x direction. // y_order: The order of the derivative in y direction. // To calculate the gradient in x direction we use: x_order= 1 and y_order= 0. // To calculate the gradient in y direction we use: x_order= 0 and y_order= 1. cv::Sobel(image, image_X, CV_8UC1, 1, 0); cv::imshow("Sobel image", image_X); cv::waitKey(); As we can see, the gradient in y is getting more positive as you go down, in the sense that the image is getting brighter in this direction. So for such an image, \(\partial f \) with respect to \(x \) would be 0, yet we would have a nonzero \(\partial f \) with respect to \(y \). Python C++ cv::Mat image_Y; // this is how we can create a vertical edge detector. cv::Sobel(image, image_Y, CV_8UC1, 0, 1); cv::imshow("Sobel image", image_Y); cv::waitKey(); And, of course, we can have changes in both directions, and that’s the gradient itself. It’s \(\partial f \) with respect to \(x \), and \(\partial f \) with respect to \(y \). It has a magnitude, which shows how quickly things are getting brighter, and also an angle \(\theta \) that represents the direction in which the intensity is increasing. Here we are just expressing all of this.
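The post's Python tabs are empty in this capture, so here is a minimal Python/OpenCV sketch of the same two Sobel calls (my own, assuming the same dragon.jpg input as the C++ version):

```python
import cv2

image = cv2.imread("dragon.jpg", cv2.IMREAD_GRAYSCALE)

# Horizontal edge detector: first derivative in x (dx=1, dy=0)
image_x = cv2.Sobel(image, cv2.CV_8U, 1, 0)
# Vertical edge detector: first derivative in y (dx=0, dy=1)
image_y = cv2.Sobel(image, cv2.CV_8U, 0, 1)

cv2.imshow("Sobel x", image_x)
cv2.imshow("Sobel y", image_y)
cv2.waitKey()
```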
Python C++ // When we combine the horizontal and vertical edge detector together cv::Mat sobel = image_X + image_Y; cv::imshow("Sobel - L1 norm", sobel); cv::waitKey(); The gradient of an image – Here, the gradient is given by the two partial derivatives: \(\partial f \) with respect to \(x \), and \(\partial f \) with respect to \(y \). $$ \bigtriangledown f= \left [ \frac{\partial f}{\partial x},\frac{\partial f}{\partial y} \right ] $$ The gradient direction is given by – The direction can be computed as the arctangent of the change in \(y \) over the change in \(x \). Sometimes, it’s also recommended to use atan2, so that if \(\partial f \) with respect to \(x \) is zero, your machine doesn’t explode. $$ \theta = tan^{-1}\left ( \frac{\partial f}{\partial y}/\frac{\partial f}{\partial x} \right ) $$ The amount of change is given by the gradient magnitude – This shows us how rapidly the function is changing, which is very much related to finding edges: we look for large gradient magnitudes in the image. $$ \left \| \bigtriangledown f \right \|= \sqrt{\left ( \frac{\partial f}{\partial x} \right )^{2}+\left ( \frac{\partial f}{\partial y} \right )^{2}} $$ 2. Finite Differences The calculus story above is just fine. However, how do we compute these gradients in our images, given that we work with discrete variables and not with continuous ones? Let’s see more about discrete gradients: $$ \frac{ \partial f\left ( x,y \right )}{\partial x}= \lim_{\varepsilon \rightarrow 0}\frac{f\left ( x+\varepsilon ,y \right )-f\left ( x,y \right )}{\varepsilon } $$ In continuous math, the partial derivative \(\partial f \) with respect to \(x \) is this limit. So we move a little bit in the \(x \) direction, subtract off the original value and divide by \(\varepsilon \). As \(\varepsilon \) goes to zero, this becomes our derivative. In the discrete world, we can’t move closer. We have to take a finite difference. $$ \frac{ \partial f\left ( x,y \right )}{\partial x}\approx \frac{f\left ( x+1,y \right )-f\left ( x,y \right )}{1 }\approx f\left ( x+1,y \right )-f\left ( x,y \right ) $$ When approximating our partial derivative by a finite difference, we take one step in the \(x \) direction, then subtract off the original and divide by \(1 \), since that’s how big a step we took. So the value becomes \(f\left (x+1,y \right )-f\left ( x,y \right ) \). In other words, this is called the right derivative, because it takes one step to the right. For instance, let’s take a closer look at our finite differences, to think about these derivatives the right way. A simple illustration of finite differences: we have a picture of this dragon head, which is a good illustration of those finite-difference images. First question: is this the finite difference in \(x \) or in \(y \)? Let’s have a look. Going through the image in the \(x \)-direction, we get some kind of transition between these vertical stripes. To clarify, we get a change from bright values, to dark values, then bright values again, across the image horizontally. In addition, we can hardly see any changes in \(y \), but only in \(x \). Hence, this is going to be a finite difference in \(x \). The image we are presenting contains both negative and positive numbers. When showing an image, we would normally make \(0 \) black and larger numbers white. However, here it is preferable to make some minimum value black and some maximum value white. So we can make \(-128 \) black and \(+ 127 \) white.
The in-between values (including zero) will be gray. 3. The Discrete Gradient How do we pick an “operator” (mask/kernel) that can be applied to any image to implement these gradients? Well, here is an example below of an operator \(H \), with 3 rows and 2 columns. Is this a good operator? Well, you figured it: NO!!! Why is that? One reason is that there is no middle pixel value. Now you might ask yourself, why is it a plus half and minus a half? Um… A good question, if you did wonder why. It is the average, or normalization, of the right derivative and the left derivative. The right derivative would be a \(+1 \) at the right of the kernel, and \(-1 \) at the middle. The left derivative would be a \(-1 \) at the left of the kernel, and \(+1 \) at the middle. To get the average, we add them, which gives \(-1 \) (left), \(0 \) (middle), \(+1 \) (right), and then divide by two to get \(-\frac{1}{2} \) (left), \(+\frac{1}{2} \) (right) with a \(0 \) in the middle. 4. Sobel Operator The most common filter for doing derivatives and edges is the Sobel operator. It was named after Irwin Sobel and Gary Feldman, who presented their idea of an “Isotropic 3×3 Image Gradient Operator” in 1968. The operator looks like the image below. But instead of \(-\frac{1}{2} \) and \(+\frac{1}{2} \), it’s got this weird thing where it’s doing these eighths. And you can see that it does not only a \(+2 \), \(-2 \) (which we would then divide by \(4 \) to get the same values as before), but also a \(+1 \), \(-1 \) on the rows above and below. The idea is that, to compute a derivative at a pixel, we won’t look only left and right of the pixel itself, but also at the rows nearby. Another equation to be familiar with: by the way, the \(y \) is here as well, and in this case, \(y \) is positive going up. Remember, it can go in either direction. The Sobel gradient is made up of the application of these \(s_ {x} \) and \(s_ {y} \) kernels, which gets us these values. $$ \bigtriangledown I= \begin{bmatrix}g_{x} & g_{y}\end{bmatrix}^{T} $$ The gradient magnitude is the square root of the sum of squares. $$ g= \left ( g_{x}^{2}+g_{y}^{2} \right )^{1/2} $$ The gradient direction is what we did before; and here it is the \(atan2 \) we were talking about. $$ \theta = atan2\left ( g_{y},g_{x} \right ) $$ Here is an example. The picture on the left is an image. The one in the middle is the gradient magnitude: we applied the Sobel operator, took the square root of the sum of squares, then thresholded it. You will notice that it’s not an awful edge image, but it’s not a great edge image either. So we are partly towards getting that done. Python C++ // this idea is inspired by the book // "Robert Laganiere, Learning OpenCV 3: Computer Vision" // what it actually does is map the non-edges to white values // and edges to dark values, so that it is more natural for our visual interpretation.
// this is done according to formula // sobelImage = - alpha * sobel + 255; double sobmin, sobmax; cv::minMaxLoc(sobel, &sobmin, &sobmax); cv::Mat sobelImage; sobel.convertTo(sobelImage, CV_8UC1, -255./sobmax, 255); cv::imshow("Edges with a sobel detector", sobelImage); cv::waitKey(); cv::Mat image_Sobel_thresholded; double max_value, min_value; cv::minMaxLoc(sobelImage, &min_value, &max_value); //image_Laplacian = image_Laplacian / max_value * 255; cv::threshold(sobelImage, image_Sobel_thresholded, 20, 255, cv::THRESH_BINARY); cv::imshow("Thresholded Sobel", image_Sobel_thresholded); cv::waitKey(); // Also, a very popular filter for edge detection is the Laplacian operator // It calculates second derivatives in both the x and y directions and then sums them. cv::Mat image_Laplacian; // here we will apply low pass filtering in order to better detect edges // try to comment this line out and the result will be much poorer. cv::GaussianBlur(image, image, Size(5,5), 1); cv::Laplacian(image, image_Laplacian, CV_8UC1); cv::imshow("The Laplacian", image_Laplacian); cv::waitKey(); cv::Mat image_Laplacian_thresholded; double max_value1, min_value1; cv::minMaxLoc(image_Laplacian, &min_value1, &max_value1); //image_Laplacian = image_Laplacian / max_value * 255; cv::threshold(image_Laplacian, image_Laplacian_thresholded, 70, 220, cv::THRESH_BINARY); cv::imshow("Thresholded Laplacian", image_Laplacian_thresholded); cv::waitKey(); return 0;} Summary If you are just starting with image processing, this post gave you insights into how we can calculate image gradients. We will use these ideas heavily in image processing, and they will assist us in detecting edges and objects.
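The Python tabs are also empty for these steps; a rough Python/OpenCV sketch of the same pipeline (my own, not the post's code) could look like this:

```python
import cv2

image = cv2.imread("dragon.jpg", cv2.IMREAD_GRAYSCALE)
sobel_x = cv2.Sobel(image, cv2.CV_8U, 1, 0)
sobel_y = cv2.Sobel(image, cv2.CV_8U, 0, 1)
sobel = cv2.add(sobel_x, sobel_y)           # saturating add, like cv::Mat + in C++

# Invert so that edges come out dark on a light background, as in the post
_, sobel_max, _, _ = cv2.minMaxLoc(sobel)
sobel_image = cv2.convertScaleAbs(sobel, alpha=-255.0 / sobel_max, beta=255)
_, sobel_thresh = cv2.threshold(sobel_image, 20, 255, cv2.THRESH_BINARY)

# Laplacian: blur first, then take second derivatives and threshold
blurred = cv2.GaussianBlur(image, (5, 5), 1)
laplacian = cv2.Laplacian(blurred, cv2.CV_8U)
_, laplacian_thresh = cv2.threshold(laplacian, 70, 220, cv2.THRESH_BINARY)

cv2.imshow("Thresholded Sobel", sobel_thresh)
cv2.imshow("Thresholded Laplacian", laplacian_thresh)
cv2.waitKey()
```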
We found that the rotational wavefunctions are functions called the Spherical Harmonics, and that these functions are products of Associated Legendre Functions and the \(e^{im_J \varphi}\) function. Two quantum numbers, \(J\) and \(m_J\), are associated with the rotational motion of a diatomic molecule. The quantum numbers identify or specify the particular functions that describe particular rotational states. The functions are written as \[ \psi _{J,m_J} (\theta , \varphi) = Y_{J, m_J} (\theta , \varphi) = \Theta ^{|m_J|}_J (\theta) \Phi _{m_J} (\varphi) \tag {7-66}\] The absolute square of the wavefunction evaluated at a particular \((\theta , \varphi)\) gives the probability density for finding the internuclear axis aligned at these angles. Constraints on the wavefunctions arose from boundary conditions, the requirement that the functions be single valued, and the interpretation of the functions as probability amplitudes. The Spherical Harmonic functions for the rigid rotor have these necessary properties only when \(|m_J| \le J\) and \(m_J\) is an integer. \(J\) is the upper limit to the value of \(m_J\), but there is no upper limit to the value of \(J\). The subscript \(J\) is added to \(m_J\) as a reminder that \(J\) controls the allowed range of \(m_J\). The angular momentum of a rotating diatomic molecule is quantized by the same constraints that quantize the energy of a rotating system. As summarized in the table below, the rotational angular momentum quantum number, \(J\), specifies both the energy and the square of the angular momentum. The z-component of the angular momentum is specified by \(m_J\). Rotational spectra consist of multiple lines spaced nearly equally apart because many rotational levels are populated at room temperature and the rotational energy level spacing increases by approximately \(2B\) with each increase in \(J\). The rotational constant, \(B\), can be used to calculate the bond length of a diatomic molecule. The spectroscopic selection rules for rotation, shown in the Overview table, allow transitions between neighboring \(J\) states with the constraint that \(m_J\) change by 0 or 1 unit. Additionally, the molecule must have a non-zero dipole moment in order to move from one state to another by interacting with electromagnetic radiation. The factors that interact to control the line intensities in rotational spectra \((\gamma _{max})\) include the magnitude of the transition moment, \(\mu _T\), and the population difference between the initial and final states involved in the transition, \(\Delta n\). So far you have seen three different quantum mechanical models (the particle-in-a-box, the harmonic oscillator, and the rigid rotor) that can be used to describe chemically interesting phenomena (absorption of light by cyanine dye molecules, the vibration of molecules to determine bond force constants, and the rotation of molecules to determine bond lengths). For these cases, you should remember the chemical problem, the form of the Hamiltonian, and the characteristics of the wavefunctions (i.e. the names of the functions, and their mathematical and graphical forms). Also remember the associated energy level structure, values for the quantum numbers, and selection rules for electric-dipole transitions. As we shall see in the following chapter, the selection rules for the rigid rotor also apply to the hydrogen atom and other atoms because the atomic wavefunctions include the same spherical harmonic angular functions, eigenfunctions of the angular momentum operators \(\hat {M} ^2\) and \(\hat {M} _z\).
The selection rules result from transition moment integrals that involve the same angular wavefunctions and therefore are the same for rotational transitions in diatomic molecules and electronic transitions in atoms.

Exercise \(\PageIndex{1}\): Complete the table below. For an example of a completed table, see Chapter 4.

Overview of key concepts and equations for the Rigid Rotor (rows to complete): Potential energy; Hamiltonian; Wavefunctions; Quantum Numbers; Energies; Spectroscopic Selection Rules; Angular Momentum Properties.

Contributors: Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski.
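As a worked numerical illustration of the bond-length remark above (my addition, not from the original text), using the literature rotational constant \(B \approx 1.93\ \mathrm{cm^{-1}}\) for \(^{12}C^{16}O\) and the relation \(B = h/(8\pi^2 c \mu r^2)\):

```python
import math

h = 6.62607e-34        # J s
c = 2.99792e10         # speed of light in cm/s, so that B can stay in cm^-1
amu = 1.66054e-27      # kg
B = 1.9313             # cm^-1, rotational constant of 12C16O

mu = (12.000 * 15.995) / (12.000 + 15.995) * amu   # reduced mass in kg
r = math.sqrt(h / (8 * math.pi**2 * c * mu * B))   # bond length in metres

print(f"r = {r * 1e12:.1f} pm")   # about 113 pm, the accepted C-O bond length
```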
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca... I am a bit confused in classical physics's angular momentum. For a orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. the old coordinate), angular momentum should be still conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time ) in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the 1st term varies with time. (where R is the shift of coordinate, since R is constant, and p sort of rotating.) would anyone kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet ?Is it possible to make a time machine ever? Please give an easy answer,a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. 
— Countto1047 secs ago @vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. Now take the neigborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$ Why is it that you can only cool down a substance so far before the energy goes into changing it's state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape but this leads me another question I forgot ages ago If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
Given that the eigenstates of the position operator can be written as $\delta(x-x')$, suppose we have a particle in an infinite potential well with walls at $x=0$ and $x=L$. I measure the particle to be at the position $x=L/2$, so the particle is in the eigenstate $ |x \rangle = \delta(x-L/2)$. Suppose now that I want to measure the energy of the particle. The eigenstates of the energy operator are given by: $$ |\psi_n\rangle = \sqrt{\frac{2}{L}}\sin \left( \frac{n\pi x}{L} \right) $$ In order to measure the energy I understand that I have to expand the original eigenstate in terms of the new energy eigenstates: $$ |x\rangle = \sum|\psi_n\rangle\langle\psi_n|x\rangle $$ where the probability of collapse into an eigenstate is given by: $$ P_n = |\langle\psi_n|x\rangle|^2 $$ But now I sort of run into an issue. Sure, I can say that: $$ \langle\psi_n|x\rangle = \int \sqrt{\frac{2}{L}}\sin \left( \frac{n\pi x}{L} \right)\delta(x-L/2)dx $$ and since $$ \int \delta(x-x')f(x)dx = f(x') $$ I can say $$ \langle\psi_n|x\rangle = \sqrt{\frac{2}{L}}\sin \left( \frac{n\pi }{2} \right) $$ and, $$ P_n=|\langle\psi_n|x\rangle|^2 = \frac{2}{L}\sin^2 \left( \frac{n\pi}{2} \right) $$ I know that this means that all odd values of n are equally probable and all even values are impossible, but probability is supposed to be dimensionless, so what's happened here? What rookie error have I made?
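A purely symbolic restatement of the arithmetic above (my own sketch; it merely reproduces the overlap and $P_n$ and makes the leftover $1/L$ explicit, it does not resolve the dimensional puzzle):

```python
import sympy as sp

# Illustrative check of the overlap <psi_n | x = L/2> from the question above.
x, L = sp.symbols('x L', positive=True)
n = sp.symbols('n', integer=True, positive=True)

psi_n = sp.sqrt(2 / L) * sp.sin(n * sp.pi * x / L)

# Integrating psi_n(x) * delta(x - L/2) over (0, L) just evaluates psi_n at L/2.
overlap = psi_n.subs(x, L / 2)
P_n = sp.simplify(overlap**2)

print(overlap)   # sqrt(2)*sin(pi*n/2)/sqrt(L)
print(P_n)       # 2*sin(pi*n/2)**2/L   -- note the factor 1/L, i.e. dimensions of 1/length
```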
I'm a little bit confused by the weak form of the Euler-Lagrange equation, since it seems to come in several forms. Let $\Omega=(a,b)\subset \mathbb R$ and $f\in \mathcal C^0(\bar\Omega\times \mathbb R\times \mathbb R )$, $f=f(x,u,\xi)$. The different weak forms I have are: 1) $$\int_a^b (f_u \varphi+f_\xi \varphi')=0,\quad \forall \varphi\in \mathcal C_0^\infty (a,b).$$ 2) $$\int_a^b(f_u \varphi+f_\xi \varphi')=0,\quad \forall \varphi\in W_0^{1,p}(a,b).$$ So why do we take $\varphi\in \mathcal C_0^\infty (a,b)$ in one case and $\varphi\in W_0^{1,p}(a,b)$ in the other? It's a little bit confusing for me.
Hello guys! I was wondering if you knew some books/articles that have a good introduction to convexity in the context of variational calculus (functional analysis). I was reading Young's "calculus of variations and optimal control theory" but I'm not that far into the book and I don't know if skipping chapters is a good idea. I don't know of a good reference, but I'm pretty sure that just means that second derivatives have consistent signs over the region of interest. (That is certainly a sufficient condition for Legendre transforms.) @dm__ yes have studied bells thm at length ~2 decades now. it might seem airtight and has stood the test of time over ½ century, but yet there is some fineprint/ loopholes that even phd physicists/ experts/ specialists are not all aware of. those who fervently believe like Bohm that no new physics will ever supercede QM are likely to be disappointed/ dashed, now or later... oops lol typo bohm bohr btw what is not widely appreciated either is that nonlocality can be an emergent property of a fairly simple classical system, it seems almost nobody has expanded this at length/ pushed it to its deepest extent. hint: harmonic oscillators + wave medium + coupling etc But I have seen that the convexity is associated to minimizers/maximizers of the functional, whereas the sign second variation is not a sufficient condition for that. That kind of makes me think that those concepts are not equivalent in the case of functionals... @dm__ generally think sampling "bias" is not completely ruled out by existing experiments. some of this goes back to CHSH 1969. there is unquestioned reliance on this papers formulation by most subsequent experiments. am not saying its wrong, think only that theres very subtle loophole(s) in it that havent yet been widely discovered. there are many other refs to look into for someone extremely motivated/ ambitious (such individuals are rare). en.wikipedia.org/wiki/CHSH_inequality @dm__ it stands as a math proof ("based on certain assumptions"), have no objections. but its a thm aimed at physical reality. the translation into experiment requires extraordinary finesse, and the complex analysis starts with CHSH 1969. etc While it's not something usual, I've noticed that sometimes people edit my question or answer with a more complex notation or incorrect information/formulas. While I don't think this is done with malicious intent, it has sometimes confused people when I'm either asking or explaining something, as... @vzn what do you make of the most recent (2015) experiments? "In 2015 the first three significant-loophole-free Bell-tests were published within three months by independent groups in Delft, Vienna and Boulder. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them “loophole-free” in the sense that all remaining conceivable loopholes like superdeterminism require truly exotic hypotheses that might never get closed experimentally." @dm__ yes blogged on those. they are more airtight than previous experiments. but still seem based on CHSH. urge you to think deeply about CHSH in a way that physicists are not paying attention. ah, voila even wikipedia spells it out! amazing > The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. 
In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. > A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment. ↑ suspect entire general LHV theory of QM lurks in these loophole(s)! there has been very little attn focused in this area... :o how about this for a radical idea? the hidden variables determine the probability of detection...! :o o_O @vzn honest question, would there ever be an experiment that would fundamentally rule out nonlocality to you? and if so, what would that be? what would fundamentally show, in your opinion, that the universe is inherently local? @dm__ my feeling is that something more can be milked out of bell experiments that has not been revealed so far. suppose that one could experimentally control the degree of violation, wouldnt that be extraordinary? and theoretically problematic? my feeling/ suspicion is that must be the case. it seems to relate to detector efficiency maybe. but anyway, do believe that nonlocality can be found in classical systems as an emergent property as stated... if we go into detector efficiency, there is no end to that hole. and my beliefs have no weight. my suspicion is screaming absolutely not, as the classical is emergent from the quantum, not the other way around @vzn have remained civil, but you are being quite immature and condescending. I'd urge you to put aside the human perspective and not insist that physical reality align with what you expect it to be. all the best @dm__ ?!? no condescension intended...? am striving to be accurate with my words... you say your "beliefs have no weight," but your beliefs are essentially perfectly aligned with the establishment view... Last night dream, introduced a strange reference frame based disease called Forced motion blindness. It is a strange eye disease where the lens is such that to the patient, anything stationary wrt the floor is moving forward in a certain direction, causing them have to keep walking to catch up with them. At the same time, the normal person think they are stationary wrt to floor. The result of this discrepancy is the patient kept bumping to the normal person. In order to not bump, the person has to walk at the apparent velocity as seen by the patient. The only known way to cure it is to remo… And to make things even more confusing: Such disease is never possible in real life, for it involves two incompatible realities to coexist and coinfluence in a pluralistic fashion. In particular, as seen by those not having the disease, the patient kept ran into the back of the normal person, but to the patient, he never ran into him and is walking normally It seems my mind has gone f88888 up enough to envision two realities that with fundamentally incompatible observations, influencing each other in a consistent fashion It seems my mind is getting more and more comfortable with dialetheia now @vzn There's blatant nonlocality in Newtonian mechanics: gravity acts instantaneously. Eg, the force vector attracting the Earth to the Sun points to where the Sun is now, not where it was 500 seconds ago. @Blue ASCII is a 7 bit encoding, so it can encode a maximum of 128 characters, but 32 of those codes are control codes, like line feed, carriage return, tab, etc. OTOH, there are various 8 bit encodings known as "extended ASCII", that have more characters. 
There are quite a few 8 bit encodings that are supersets of ASCII, so I'm wary of any encoding touted to be "the" extended ASCII. If we have a system and we know all the degrees of freedom, we can find the Lagrangian of the dynamical system. What happens if we apply some non-conservative forces in the system? I mean how to deal with the Lagrangian, if we get any external non-conservative forces perturbs the system?Exampl... @Blue I think now I probably know what you mean. Encoding is the way to store information in digital form; I think I have heard the professor talking about that in my undergraduate computer course, but I thought that is not very important in actually using a computer, so I didn't study that much. What I meant by use above is what you need to know to be able to use a computer, like you need to know LaTeX commands to type them. @AvnishKabaj I have never had any of these symptoms after studying too much. When I have intensive studies, like preparing for an exam, after the exam, I feel a great wish to relax and don't want to study at all and just want to go somehwere to play crazily. @bolbteppa the (quanta) article summary is nearly popsci writing by a nonexpert. specialists will understand the link to LHV theory re quoted section. havent read the scientific articles yet but think its likely they have further ref. @PM2Ring yes so called "instantaneous action/ force at a distance" pondered as highly questionable bordering on suspicious by deep thinkers at the time. newtonian mechanics was/ is not entirely wrong. btw re gravity there are a lot of new ideas circulating wrt emergent theories that also seem to tie into GR + QM unification. @Slereah No idea. I've never done Lagrangian mechanics for a living. When I've seen it used to describe nonconservative dynamics I have indeed generally thought that it looked pretty silly, but I can see how it could be useful. I don't know enough about the possible alternatives to tell whether there are "good" ways to do it. And I'm not sure there's a reasonable definition of "non-stupid way" out there. ← lol went to metaphysical fair sat, spent $20 for palm reading, enthusiastic response on my leadership + teaching + public speaking abilities, brought small tear to my eye... or maybe was just fighting infection o_O :P How can I move a chat back to comments?In complying to the automated admonition to move comments to chat, I discovered that MathJax is was no longer rendered. This is unacceptable in this particular discussion. I therefore need to undo my action and move the chat back to comments. hmmm... actually the reduced mass comes out of using the transformation to the center of mass and relative coordinates, which have nothing to do with Lagrangian... but I'll try to find a Newtonian reference. One example is a spring of initial length $r_0$ with two masses $m_1$ and $m_2$ on the ends such that $r = r_2 - r_1$ is it's length at a given time $t$ - the force laws for the two ends are $m_1 \ddot{r}_1 = k (r - r_0)$ and $m_2 \ddot{r}_2 = - k (r - r_0)$ but since $r = r_2 - r_1$ it's more natural to subtract one from the other to get $\ddot{r} = - k (\frac{1}{m_1} + \frac{1}{m_2})(r - r_0)$ which makes it natural to define $\frac{1}{\mu} = \frac{1}{m_1} + \frac{1}{m_2}$ as a mass since $\mu$ has the dimensions of mass and since then $\mu \ddot{r} = - k (r - r_0)$ is just like $F = ma$ for a single variable $r$ i.e. 
an spring with just one mass @vzn It will be interesting if a de-scarring followed by a re scarring can be done in some way in a small region. Imagine being able to shift the wavefunction of a lab setup from one state to another thus undo the measurement, it could potentially give interesting results. Perhaps, more radically, the shifting between quantum universes may then become possible You can still use Fermi to compute transition probabilities for the perturbation (if you can actually solve for the eigenstates of the interacting system, which I don't know if you can), but there's no simple human-readable interpretation of these states anymore @Secret when you say that, it reminds me of the no cloning thm, which have always been somewhat dubious/ suspicious of. it seems like theyve already experimentally disproved the no cloning thm in some sense.
I have a small confusion in interpreting various forms of energy. Suppose two particles move towards each other with velocity $v$ and stick to each other, as seen by an observer on the ground. The collision is not elastic. From the Newtonian point of view, energy is lost in the collision and is liberated in the form of heat and sound (maybe other forms as well). But when we define $E$ as $\gamma mc^2$, we don't need to include any such energy losses. Why? Heat and sound are still produced, so where are those contributions in the expression for the energy? EDIT: When I say that losses are not included, I mean something like this: when writing down energy conservation using the relativistic expression, we end up getting a final mass greater than the initial mass, and we say that the missing energy is in the form of excess mass. So I asked: what about the energy in the form of heat and sound? Whatever expression you use, if heat and sound are produced, they are produced, regardless of the dynamics. So why do we account for energy losses as heat and sound in the Newtonian case, while we say that the energy goes into excess mass when using the relativistic expression, when there is heat and sound in both cases? EDIT-2: Replying to @enumaris. Applying energy conservation in the ground frame: $$\frac{2m_{oi}c^2}{\sqrt{1-\frac{v^2}{c^2}}}=2m_{of}c^2$$ So the argument one needs is that the final rest mass is $m_{of}=m_{oi}/\sqrt{1-\frac{v^2}{c^2}}$. Here I didn't write down the energy losses in other forms and used only the rest mass argument.
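For concreteness (my own numbers, not from the question), putting $v=0.6c$ into the EDIT-2 relation gives
$$\gamma=\frac{1}{\sqrt{1-0.6^2}}=1.25,\qquad 2\gamma m_{oi}c^2=2m_{of}c^2\ \Rightarrow\ m_{of}=1.25\,m_{oi},$$
so the composite's rest mass exceeds the sum of the initial rest masses by $2\times 0.25\,m_{oi}$, which is exactly the incoming kinetic energy $2(\gamma-1)m_{oi}c^2$ divided by $c^2$.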
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC (Springer, 2013-09) We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T}\rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
The Hamiltonian in the two cases acts on different objects, so it must be a different mathematical entity. The trick is $$\left<x\right|\hat{H}\left|\Psi(t)\right>=\int dx'\left<x\right|\hat{H}\left|x'\right>\left<x'\right|\left.\Psi(t)\right>=\left<x\right|\hat{H}\left|x\right>\left<x\right|\left.\Psi(t)\right>$$ Edit: The second equality holds if the Hamiltonian is diagonal in the position basis. That this is true for the standard-form Hamiltonian can be shown by computing matrix elements: $$H_{xx'}=\left<x\right|\hat{H}\left|x'\right>=\left<x\right|\left(\frac{\hat{P}^2}{2m}+\hat{V}\right)\left|x'\right>=\left<x\right|\frac{\hat{P}^2}{2m}\left|x'\right>+\left<x\right|\hat{V}\left|x'\right>$$ The potential term is diagonal if it is defined as a function of the position operator $\hat{x}\left|x\right>=x\left|x\right>,$ which it usually is, and this operator is by definition diagonal in the position basis. Less trivial is the momentum operator, which is defined as the generator of translations: $$\left|x+dx\right>=\left(1-i\hat{P}dx/\hbar\right)\left|x\right>$$ From this definition we can compute: $$\left<x\right|\hat{P}\left|x'\right>=i\hbar\frac{\left<x\right.\left|x'\right>-\left<x\right.\left|x'-dx''\right>}{dx''}$$ In the limit $dx''\rightarrow 0$ this shows that the momentum operator is indeed diagonal in the position basis, albeit with a distribution on the diagonal instead of ordinary numbers, as in the case of the potential. And since the momentum operator is diagonal, so is its second power, and thus also the kinetic term of the Hamiltonian. To continue the journey to the standard equation for the wave function we can write for the momentum $$\int dx'\left<x\right|\hat{P}\left|x'\right>\left<x'\right|\left.\Psi(t)\right>=i\hbar\int dx'\frac{\left<x\right.\left|x'\right>-\left<x\right.\left|x'-dx''\right>}{dx''}\left<x'\right|\left.\Psi(t)\right>=i\hbar\int dx'\frac{\delta(x-x')-\delta(x-x'+dx'')}{dx''}\left<x'\right|\left.\Psi(t)\right>=i\hbar\frac{\Psi(x,t)-\Psi(x+dx'',t)}{dx''} \rightarrow -i\hbar\frac{d}{dx}\Psi(x,t)$$ In a similar fashion you can get the second power of the momentum operator, and then, adding the potential function, you get the standard Hamiltonian of wave mechanics.
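To make the "diagonal in the position basis" statement concrete, here is a small numerical sketch (my own illustration, not from the original answer): on a position grid the potential becomes a diagonal matrix and the kinetic term a banded (tridiagonal) matrix, the finite-difference stand-in for the delta-function derivatives on the diagonal. Units $\hbar=m=1$ and an infinite well of length $L=1$ are assumed.

```python
import numpy as np

# Sketch only: discretised position basis for a particle in a box (hbar = m = 1, L = 1).
# The kinetic term -(1/2) d^2/dx^2 becomes a tridiagonal matrix; a potential V(x)
# would only add np.diag(V(x)), i.e. a purely diagonal piece.
L, N = 1.0, 1000                        # box length, number of interior grid points
x = np.linspace(0.0, L, N + 2)[1:-1]    # interior points (Dirichlet walls at 0 and L)
dx = x[1] - x[0]

H = (np.diag(np.full(N, 1.0 / dx**2))
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

evals = np.linalg.eigvalsh(H)[:3]
exact = np.array([1, 2, 3]) ** 2 * np.pi**2 / 2
print(evals)   # approx. [ 4.93, 19.74, 44.41]
print(exact)   # exactly n^2 pi^2 / 2
```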
I want to find the correlation coefficient between $W_t$ and $\int_{0}^{t}W_s ds$. I think that these are uncorrelated. But why? Thanks. If you talk about correlation then: compute the expectations: $$\mathbb{E}(W_t)=0\text{ and }\mathbb{E}\left(\int_0^tW_s ds\right)=0$$ the variances: $$\text{Var}(W_t)=t\text{ and }\text{Var}\left(\int_0^tW_s ds\right)=\frac{t^3}{3}$$ the covariance: $$\mathbb{E}\left(W_t\int_{0}^tW_sds\right)=\int_{0}^t\mathbb{E}(W_tW_s)ds=\int_0^tsds=\frac{t^2}{2}$$ then you get: $$\text{Corr}\left(W_t,\int_0^tW_s ds\right)= \frac{\sqrt{3}}{2}$$ Here one uses $$\mathbb{E}(W_uW_s)=\min(u,s)$$ and $$\text{Var}\left(\int_0^tW_sds\right)=\mathbb{E}\left(\int_0^t\int_0^tW_sW_u \,du\,ds\right).$$
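A quick Monte Carlo cross-check of the $\sqrt{3}/2\approx 0.866$ value derived above (my own sketch; the Euler discretisation, horizon $T=1$ and sample sizes are arbitrary choices):

```python
import numpy as np

# Simulate Brownian paths and estimate Corr(W_T, integral_0^T W_s ds).
rng = np.random.default_rng(0)
n_paths, n_steps, T = 20_000, 500, 1.0
dt = T / n_steps

dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)        # W at the grid times, one row per path
W_T = W[:, -1]
I_T = W.sum(axis=1) * dt         # right-endpoint Riemann sum for the integral

print(np.corrcoef(W_T, I_T)[0, 1], np.sqrt(3) / 2)   # both ~ 0.866
```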
Volume 64, № 8, 2012 Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1011-1024 We determine the exact values of upper bounds of the error of approximation by harmonic splines for functions $u$ defined on an $n$-dimensional parallelepiped $\Omega$ for which $||\Delta u||_{L_{\infty}(\Omega)} \leq 1$ and for functions $u$ defined on $\Omega$ for which $||\Delta u||_{L_{p}(\Omega)} \leq 1, \quad 1 \leq p \leq \infty$. In the first case, the error is estimated in $L_{p}(\Omega), \quad 1 \leq p \leq \infty$; in the second case, it is estimated in $L_{1}(\Omega)$. Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1025-1032 We consider the problem of the best polynomial approximation of $2\pi$-periodic functions in the space $L_2$ in the case where the error of approximation $E_{n-1}(f)$ is estimated in terms of the $k$th-order modulus of continuity $\Omega_k(f)$ in which the Steklov operator $S_h f$ is used instead of the operator of translation $T_h f (x) = f(x + h)$. For the classes of functions defined using the indicated smoothness characteristic, we determine the exact values of different $n$-widths. Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1033-1040 We describe derived categories of coherent sheaves over nodal noncommutative curves of string and almost string types. Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1041-1052 We obtain exact-order estimates for the trigonometric widths of the classes $B^{\Omega}_{p\theta}$ of periodic functions of many variables in the space $L_q$ for some relations between the parameters $p$ and $q$. Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1053-1066 This paper is a continuation of our investigation of the truncated matrix trigonometric moment problem begun in Ukr. Mat. Zh. - 2011. - 63, № 6. - P. 786-797. In the present paper, we obtain the Nevanlinna formula for this moment problem in the general case. We assume here that there is more than one moment and the moment problem is solvable and has more than one solution. The coefficients of the corresponding matrix linear fractional transformation are expressed in explicit form via the prescribed moments. Simple determinacy conditions for the moment problem are presented. Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1067-1079 We prove a theorem on the existence and uniqueness and obtain a representation using the Green vector function for the solution of the Cauchy problem $$u^{(\beta)}_t + a^2(-\Delta)^{\alpha/2}u = F(x, t), \quad (x, t) \in \mathbb{R} ^n \times (0, T], \quad a = \text{const} $$ $$u(x, 0) = u_0(x), \quad x \in \mathbb{R} ^n$$ where $u^{(\beta)}_t$ is the Riemann-Liouville fractional derivative of order $\beta \in (0,1)$, and $u_0$ and $F$ belong to some spaces of generalized functions. We also establish the character of the singularity of the solution at $t = 0$ and its dependence on the order of singularity of the given generalized function in the initial condition and the character of the power singularities of the function on the right-hand side of the equation. Here, the fractional $n$-dimensional Laplace operator is defined via the Fourier transform $\mathfrak{F}$ by $\mathfrak{F}[(-\Delta)^{\alpha/2} \psi(x)] = |\lambda|^{\alpha} \mathfrak{F}[\psi(x)]$. Periodic solutions of a parabolic equation with homogeneous Dirichlet boundary condition and linearly increasing discontinuous nonlinearity Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1080-1088 We consider a resonance problem of the existence of periodic solutions of parabolic equations with discontinuous nonlinearities and a homogeneous Dirichlet boundary condition. 
It is assumed that the coefficients of the differential operator do not depend on time, and the growth of the nonlinearity at infinity is linear. The operator formulation of the problem reduces it to the problem of the existence of a fixed point of a convex compact mapping. A theorem on the existence of generalized and strong periodic solutions is proved. Asymptotic m-phase soliton-type solutions of a singularly perturbed Korteweg–de Vries equation with variable coefficients. II Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1089-1105 We consider the problem of the construction of higher terms of asymptotic many-phase soliton-type solutions of the singularly perturbed Korteweg–de Vries equation with variable coefficients. The accuracy with which the obtained asymptotic solution satisfies the original equation is determined. Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1106-1120 We obtain exact-order estimates for the best bilinear approximations of the classes $S_{p, \theta}^{\Omega} B$ of periodic functions of two variables in the space $L_q$ for some relations between the parameters $p, q, \theta$. Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1121-1131 We study the relaxed elastic line in a more general case on an oriented surface. In particular, we obtain a differential equation with three boundary conditions for the generalized relaxed elastic line. Then we analyze the results in a plane, on a sphere, on a cylinder, and on the geodesics of these surfaces. Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1132-1137 A lower bound is found in the law of the iterated logarithm for the maximum scheme. An admissible estimator for the rth power of a bounded scale parameter in a subclass of the exponential family under entropy loss function Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1138-1147 We consider an admissible estimator for the rth power of a scale parameter that is lower or upper bounded in a subclass of the scale-parameter exponential family under the entropy loss function. An admissible estimator of a bounded parameter in the family of transformed chi-square distributions is also given. Ukr. Mat. Zh. - 2012. - 64, № 8. - pp. 1148-1152 We obtain an asymptotic equality for the upper bounds of deviations of Fejér means on the Zygmund class of functions holomorphic in the unit disk.
I would like to find out if this integral converges: $$\int_{-\infty}^{\infty} e^{-\sqrt{|x|}}\,\mathrm{d}x$$ Since the integrand is symmetric, I figured I could focus on only one side of the integral, namely $\displaystyle\int_{0}^{\infty} e^{-\sqrt{|x|}}\,\mathrm{d}x$, which in this case is equivalent to $\displaystyle\int_{0}^{\infty} e^{-\sqrt{x}}\,\mathrm{d}x$ (since $|x| = x$ when $x > 0$). Also, $e^{-\sqrt{x}}$ is bounded on $[0,1]$, so the integral over that interval is finite, and I will therefore concentrate on the integral from 1 to $\infty$. I know this converges (checked with a calculator) but cannot seem to find an argument via the comparison test, i.e. to say that since $e^{-\sqrt{x}} < {}$ "some other function whose integral converges" for $x > 1$, the integral $\displaystyle\int_1^{\infty} e^{-\sqrt{x}}\,\mathrm{d}x$ converges. In other words, I need a function which is always greater than $e^{-\sqrt{x}}$ and whose integral converges. I know that the integrals of $e^{-x}$ and $e^{-2x}$ both converge, but these functions are both smaller than $e^{-\sqrt{x}}$ for $x > 1$. Tips would be appreciated. Thank you.
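Not a comparison-test argument, but as a numerical sanity check (my own sketch): the substitution $u=\sqrt{x}$ gives $\int_0^\infty e^{-\sqrt{x}}\,dx=\int_0^\infty 2u e^{-u}\,du=2$, so the symmetric integral should come out as $4$.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the convergence discussed above (value should be ~2 per half-line).
val, err = quad(lambda x: np.exp(-np.sqrt(x)), 0, np.inf)
print(val, err)        # ~ 2.0
print(2 * val)         # ~ 4.0 for the full integral over the real line, by symmetry
```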
When a large mine reaches the end of its economic life, miners are required to return the mining site to as near as possible its original pre-mining state. Some large mines in Australia are expected to reach this point in the next decade or so, and mining companies are starting to plan for site rehabilitation. Many open pit mines extend to depths below the water table, and so require continuous dewatering for dry operation. Over decades of a mining operation this dewatering can significantly affect the surrounding water table, as well as set in train hydrological processes that will persist after cessation of mining operations. Hydrological rehabilitation of a mine site is an important part of any overall mine site rehabilitation, but it presents many unique challenges. Firstly, the hydrological state of a mine site and its surrounds can only be inferred and is unlikely to be known with much certainty. At a more fundamental level, groundwater flow is a diffusion-like process and so it is not easy to entirely "undo" the effects of historical dewatering in "finite" time. For these and other reasons hydrological rehabilitation shares many features of inverse problems. The best way to carry out such rehabilitation seems far from clear at the present time. In this talk we will discuss from a mathematical point of view some of the factors that need to be considered in developing a long-term rehabilitation plan. RNA interference (RNAi) is a gene silencing mechanism requiring nucleotide base pairing of two RNA strands to form double-stranded RNA (dsRNA). Traditional RNAi in plants is based on dsRNA of canonical Watson-Crick base pairs, namely Guanine:Cytosine (G:C) and Adenine:Uracil (A:U) base pairs, but production of such dsRNA requires perfectly inverted repeat DNA structures that can attract gene inactivation, reducing RNAi efficacy. Including G:U wobble base pairs in the dsRNA design disrupts inverted repeat DNA structures, therefore giving efficient RNAi in plants. In 2011 I was approached by Greg Davis to solve an unknown free surface problem of steady state two dimensional diffusion of petrol vapour upwards in the soil, reacting with oxygen diffusing down from the surface, with an impermeable boundary on part of the upper surface. I devised a transformation to convert the problem to a mixed boundary value problem for the Laplace equation for one variable on a known domain. I expected to solve this problem numerically using finite difference methods. I remembered that in the mid 1980s Bob Anderssen and Frank de Hoog had worked with Francis Rose on propagating crack problems, and had solved the resultant mixed boundary value problems for the Laplace equation on a strip. I was able to transform the petrol vapour problem and use their previous solution, which gives surprisingly simple explicit criteria for the buildup of petrol vapour under an obstruction on the surface. I will discuss some extensions. We have recently presented an integrable discrete model of one-dimensional soil water infiltration. It is based on the continuum model by Broadbridge and White: a nonlinear convection-diffusion equation with a nonlinear flux boundary condition at the surface. This is transformed to the Burgers equation with a time-dependent flux term by the hodograph transformation. Our discrete model preserves underlying integrability, and takes the form of a self-adaptive moving mesh scheme. 
The discretisation builds on linearisability of the Burgers equation, producing the linear diffusion equation. Naive discretisation of the linearised equation using the Euler scheme is often used in the theory of discrete integrable systems, but this does not necessarily produce a good numerical method for the original equation. Taking desirable properties of a numerical scheme into account, we propose an alternative discrete model that produces solutions with similar accuracy to direct computation on the original nonlinear equation, but with clear benefits regarding computational cost. Time Session 10:15am Time Session 10:45 Many evolution equations involve a memory term with a weakly singular kernel. In a numerical scheme with $N$ time steps, evaluating this term in the obvious way costs order $N^2$ operations. I will describe a simple, fast algorithm that reduces the cost to order $N \log N$ operations. The algorithm relies on approximating the kernel by a sum of negative exponentials. Completely monotone functions arise in linear viscoelasticity, with negative exponentials being the most familiar examples. We will consider several ways a general completely monotone function can be approximated by suitable positive linear combinations of negative exponentials, and how certain other common families of completely monotone functions fail in this respect. Both very old and very new results will be discussed. Managing airspace requires, among other things, models of how aircraft might come into proximity. These have two components: a kinematic description of the aircraft in flight and a set of rules for deciding whether proximity has occurred. In this talk we focus on the rules, so keep the kinematics as simple as possible by adopting a crossing track model. A set of proximity rules is then chosen, and these are used to partition a parameter space into regions, some of which can generate conflict between the aircraft. Our rules allow for both spatial and temporal conflicts, which is not commonly considered. The results are applicable hierarchically, from strategic planning to real-time adaption required during in-flight operations. Regression adjustments are widely used for adjusting the sample mean of a variable of interest to account for deviations of the means of related auxiliary variables from their known population values. The adjustments produce estimators with variances smaller than that of the original sample mean. The method has a long history in the survey literature, and is closely related to covariance analysis in designed experiments and the control variates method used for variance reduction in Monte Carlo studies. Time Session 12:30pm Time Session 1:30pm Vernalization refers to the acceleration of flowering that occurs in some plants following exposure to prolonged periods of low temperatures (winter).This response evolved in plants growing in regions with long harsh winters to ensure that plants flower in the spring when the weather is suitable for pollination and seed development. While plants often use environmental cues to trigger flowering and other processes, the properties of vernalization indicate that it is mediated by an epigenetic mechanism. Plants not only remember that they have been exposed to winter, even after the weather warms up, but they can measure the duration of winter; the longer the period of cold weather, the earlier the plants flower. 
The memory of winter is not transmitted to the next generation and the progeny of a vernalized plant must be exposed to low temperatures to flower early. The key gene regulating vernalization in Arabidopsis thaliana is FLOWERING LOCUS C (FLC), a repressor of flowering. FLC expression is repressed by low temperatures, but the mechanism leading to the initial decrease in FLC transcription remains a mystery. In a collaboration with Bob Anderssen, we proposed that the drop in temperature first causes a change in the topology of the chromatin polymer encompassing the FLC gene, and this in turn results in the repression of FLC transcription. We have recently presented an integrable discrete model of one-dimensional soil water infiltration. It is based on the continuum model by Broadbridge and White: a nonlinear convection-diffusion equation with a nonlinear flux boundary condition at the surface. This is transformed to the Burgers equation with a time-dependent flux term by the hodograph transformation. Our discrete model preserves underlying integrability, and takes the form of a self-adaptive moving mesh scheme. The discretisation builds on linearisability of the Burgers equation, producing the linear diffusion equation. Naive discretisation of the linearised equation using the Euler scheme is often used in the theory of discrete integrable systems, but this does not necessarily produce a good numerical method for the original equation. Taking desirable properties of a numerical scheme into account, we propose an alternative discrete model that produces solutions with similar accuracy to direct computation on the original nonlinear equation, but with clear benefits regarding computational cost. When an amphibian egg is fertilised, a wave of calcium ions travels around the surface of the egg to help prevent the entry of multiple sperm. This process can be described with a nonlinear reaction-diffusion equation with a cubic reaction term. Here, we present the first analytic solutions to this 30 year old problem, demonstrating various observed phenomena, including waves and spirals. Autonomous ODE models for microbial growth in a closed environment give solutions that can only grow or decay monotonically or asymptote. These models can never capture the mortality phase in a typical microbial growth curve. A generalisation \[ \frac{dN}{dt} = \alpha(t) N^\beta - a(t)N^b + \psi(t),\quad N=N(t)\] of the non-autonomous von Bertalanffy equation has been proposed as a model of the interaction of the current size of a population with the environment in which it is living. The relationship between the introduced non-autonomous terms is explored through Lie symmetry analysis. Constraints on the functions $\alpha(t)$, $a(t)$ and $\psi(t)$ which allow for the existence of nontrivial symmetries are identified, and some new closed form solutions are constructed. Time Session 3:10pm Time Session 3:40pm As well as his mathematical pursuits Bob Anderssen supports the mathematics pipeline in myriad ways. One of the key roles Bob plays is as Chair of the AMSI Education Advisory Committee. In this session I will outline the nature of this work, and describe one of our most successful projects: CHOOSEMATHS. Bob Anderssen has made many substantial contributions to the solution of inverse problems not just those with which I was most familiar in the early days - those related to numerical differentiation and solving ill-posed integral equations. 
This talk is more of a personal refection on the impact that Bob has had on my research journey. It begins with our work on solving Abel integral equations in the field of stereology and the lessons obtained there for future work in hydrology, environmental modelling and integrated assessment. It is well-known that ill-posed inverse problems are solved by making, sometimes heroic and often untested, assumptions – for example simplifying the model, regularizing the solution or constraining the solution space by assuming priors in a Bayesian probabilistic framework. In hydrologic and environmental model representations most people now get it that they need to constrain their representations if they want them to be identifiable. But seldom do we report on the limitations of our assumptions or compare them with alternatives. So we have got to first base. In integrated assessment (IA) problems, however, the thinking is much more pragmatic. IA is the metadiscipline of bringing knowledge together to assess a policy problem. Think Murray-Darling Basin Plan and promulgating water diversion limits to irrigation that are sustainable for ecosystems and local communities. This – as with most confronting water resource issues - is a wicked problem where there is no universally agreed problem formulation, results are contested and the socio-environmental systems being modelled are fraught with pervasive uncertainties. Putting all the knowledge together to support solutions to wicked problems is also an inverse problem. The talk will indicate how IA goes about addressing such monumental tasks. An informal ramble on Bob Anderssen, computational mathematics in Australia, and all that. 5pm Registration is now closed. Map About Canberra Canberra is located in the Australian Capital Territory, on the ancient lands of the Ngunnawal people, who have lived here for over 20,000 years. Canberra’s name is thought to mean ‘meeting place’, derived from the Aboriginal word Kamberra. European settlers arrived in the 1830s, and the area won selection by ballot for the federal capital in 1908. Since then the ‘Bush Capital’ has grown to become the proud home of the Australian story, with a growing population of around 390,000. Canberra hosts a wide range of tourist attractions, including various national museums, galleries and Parliament House, as well as beautiful parks and walking trails. Several attractions are within walking distance of the ANU campus, including the National Museum of Australia and the Australian National Botanic Gardens. Canberra is also a fantastic base from which to explore the many treasures of the surrounding region, including historic townships, beautiful coastlines and the famous Snowy Mountains. Learn more about what to do and see during your stay in Canberra here. Accommodation Below are some accommodation options for your visit to Canberra. Visas International visitors to Australia require a visa or an electronic travel authority (ETA) prior to arrival. It is your responsibility to ensure documentation is correct and complete before you commence your journey. Information on obtaining visas and ETAs can be found here. Transportation There are many ways to get around Canberra. Below is some useful information about Bus & Taxi transport around the ANU, the Airport and surrounding areas. Taxi If you are catching a taxi or Uber to the ANU Mathematical Sciences Institute, ask to be taken to Building #145, Science Road, ANU. We are located close to the Ian Ross Building and the ANU gym. 
A Taxi will generally cost around $40 and will take roughly 15 minutes. Pricing and time may vary depending on traffic. Taxi bookings can be made through Canberra Elite Taxis - 13 22 27. Airport Shuttle the ACT government has implemented a public bus service from the CBD to the Canberra Airport via bus Route 11 and 11A, seven days a week. Services run approximately every half hour, and better during peak times (weekdays) and every hour (weekends). To travel just use your MyWay card or pay a cash fare to the driver when boarding. A single adult trip when paying cash will cost $4.80 with cheaper fares for students and children. Significant savings can be made when travelling with MyWay. For more information about the buses to Canberra airport. Action Buses Canberra buses are a cheap and easy way of getting around town once you're here. For more information about bus services and fares.
This is a question related to chapter 2 in Polchinski's string theory book. On page 43 Polchinski calculates the Noether current from spacetime translations and then calculates its OPE with the tachyon vertex, see equations (2.3.13) and (2.3.14) $$j_a^{\mu} = \frac{i}{\alpha'}\partial_a X^{\mu}, \tag{2.3.13}$$ $$ j^{\mu}(z) :e^{i k\cdot X(0,0)}:\quad \sim\ \frac{k^{\mu}}{2 z} :e^{i k\cdot X(0,0)}:\tag{2.3.14} $$ I wanted to do a similar calculation but for spacetime Lorentz transformations. First I calculated the Noether current, I get $$ L^{\mu\nu}(z)~=~ :X^{\mu} \partial X^{\nu}: ~-~ (\mu \leftrightarrow \nu).$$ Next I calculated the OPE using Wick's formula (in the form of equation 2.2.10). My result is $$ L^{\mu\nu}(z) :e^{i k\cdot X(0)}: \quad \sim\ -\frac{\alpha'}{2} \ln |z|^2\ i k^{\mu} :\partial X^{\nu} e^{i k\cdot X(0)}: ~-~\frac{\alpha'}{2} \frac{1}{z}\ i k^{\nu} :X^{\mu} e^{i k\cdot X(0)}: ~-~ (\mu \leftrightarrow \nu).$$ I think this answer is incorrect because of the logarithm in the right hand side. So my questions are Is $ L^{\mu\nu}(z)$ defined above indeed the Noether current from spacetime Lorentz transformations? Is the OPE $ L^{\mu\nu}(z) :e^{i k\cdot X(0)}:$ above correct? Is there a link where this calculation is performed so that I can check my result?
UPD: the previous version contained a square which shouldn't be there. Actually, your function is even more simply expressed in terms of $\vartheta_4$-function. Also, I prefer this notation in which$$f(y)=\vartheta_4(0,e^{-y})=\vartheta_4\Bigl(0\Bigr|\Bigl.\frac{iy}{\pi}\Bigr).$$I.e. I use the convention $\vartheta_k(z,q)=\vartheta_k(z|\tau)$. Then, to obtain the asymptotics as $y\rightarrow 0^+$, we need two things: Jacobi's imaginary transformation, after which the transformed nome and half-period behave as $q'\rightarrow0$, $\tau'\rightarrow i\infty$ (instead of $q\rightarrow1$, $\tau\rightarrow0$):$$\vartheta_4\Bigl(0\Bigr|\Bigl.\frac{iy}{\pi}\Bigr)=\sqrt{\frac{\pi}{y}}\vartheta_2\Bigl(0\Bigr|\Bigl.\frac{i\pi}{y}\Bigr).$$ Series representations for theta functions (e.g. the formula (8) by the first link), which implies that$$\vartheta_2(0,q')\sim 2(q')^{\frac14}$$as $q'\rightarrow 0$. Note that you can also obtain an arbitrary number of terms in the asymptotic expansion if you want. Taking into account the two things above, we obtain that the leading asymptotic term is given by$$f(y\rightarrow0)\sim 2\sqrt{\frac{\pi}{y}} \exp\left\{-\frac{\pi^2}{4y}\right\}.$$
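A numerical cross-check of the leading asymptotic term above (my own sketch, using mpmath's jtheta with the convention $\vartheta_4(0,q)=\sum_{n\in\Bbb Z}(-1)^n q^{n^2}$):

```python
from mpmath import mp, jtheta, exp, sqrt, pi

# Compare f(y) = theta_4(0, e^{-y}) with 2*sqrt(pi/y)*exp(-pi^2/(4y)) as y -> 0+.
mp.dps = 30
for y in [0.5, 0.1, 0.02]:
    f = jtheta(4, 0, exp(-y))
    lead = 2 * sqrt(pi / y) * exp(-pi**2 / (4 * y))
    print(y, f, lead, f / lead)   # the ratio tends to 1
```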
In my research on put options, I come across the ratio: $\frac{(1-\mathcal{N}(d_1))}{\mathcal{N'}(d_1)}$ where $d_1=\frac{\log(S/X)+(r+\sigma^2/2)t}{\sigma \sqrt{t}}$ and $\mathcal{N}(.)$ is the Cumulative Density Function (CDF) while $\mathcal{N'}(.)$ is the Probability Density Function (PDF) for a standard normal distribution. The fraction $\frac{(1-\mathcal{N}(x))}{\mathcal{N'}(x)}$ is known as the Mills' ratio of $x$, i.e. $\lambda(x)$. While the reciprocal of Mills’ ratio ($1/\lambda(x)$) is known as the hazard (failure) rate, i.e. $h(x)=1/\lambda(x)$. The hazard rate is a function used in credit default securities to answer the question "what is the probability of an event given that the event has not already occured." This function is also described as \begin{equation} h(x) = \lim_{dx \to 0} \frac{P\left[x \leq X<x+dx | X\geq x\right]}{dx} \end{equation} However, most applications of the function $h(x)$ are interpreted with respect to time $t$. In my application, this is different since $d_1$ comes from ATM put options with a maturity of one month. I was thus wondering how I could interpret this function $h(d_1)$ for an ATM put option with a maturity of one month ? Any suggestions would be greatly appreciated :)
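For what it is worth, a small numerical sketch of the quantities involved (the parameter values below are hypothetical, chosen only to mimic an ATM one-month put with $S=X$; they are not taken from the question):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical ATM one-month put inputs (S = X); r and sigma are placeholder values.
S, X, r, sigma, t = 100.0, 100.0, 0.02, 0.20, 1.0 / 12.0

d1 = (np.log(S / X) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
mills = (1.0 - norm.cdf(d1)) / norm.pdf(d1)   # Mills' ratio  lambda(d1)
hazard = 1.0 / mills                          # hazard (failure) rate  h(d1)
print(d1, mills, hazard)
```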
The inverse and derivative connecting problems for some Hypergeometric polynomials. Abstract: Given two polynomial sets $\{ P_n(x) \}_{n\geq 0}$ and $\{ Q_n(x) \}_{n\geq 0}$ such that $\deg ( P_n(x) ) = \deg ( Q_n(x) )=n$, the so-called connection problem between them asks to find coefficients $\alpha_{n,k}$ in the expression $\displaystyle Q_n(x) =\sum_{k=0}^{n} \alpha_{n,k} P_k(x).$ The connection problem for different types of polynomials has a long history, and it is still of interest. The connection coefficients play an important role in many problems in pure and applied mathematics, especially in combinatorics, mathematical physics and quantum chemical applications. For the particular case $Q_n(x)=x^n$ the connection problem is called the inversion problem associated to $\{P_n(x)\}_{n\geq 0}.$ The particular case $Q_n(x)=P'_{n+1}(x)$ is called the derivative connecting problem for the polynomial family $\{ P_n(x) \}_{n\geq 0}.$ In this paper, we give a closed-form expression of the inversion and the derivative coefficients for hypergeometric polynomials of the form $${}_2 F_1 \left[ \left. \begin{array}{c} -n, a \\ b \end{array} \right | z \right], {}_2 F_1 \left[ \left. \begin{array}{c} -n, n+a \\ b \end{array} \right | z \right], {}_2 F_1 \left[ \left. \begin{array}{c} -n, a \\ \pm n +b \end{array} \right | z \right],$$ where $\displaystyle {}_2 F_1 \left[ \left. \begin{array}{c} a, b \\ c \end{array} \right | z \right] =\sum_{k=0}^{\infty} \frac{(a)_k (b)_k}{(c)_k} \frac{z^k}{k!}$ is the Gauss hypergeometric function and $(x)_n$ denotes the Pochhammer symbol defined by $$\displaystyle (x)_n=\begin{cases}1, & n=0, \\ x(x+1)(x+2)\cdots (x+n-1), & n>0.\end{cases}$$ All polynomials are considered over the field of real numbers.
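As an illustration of the inversion problem described in the abstract above (my own toy computation, not the paper's closed forms), one can brute-force the coefficients $\alpha_{n,k}$ in $x^n=\sum_k \alpha_{n,k}P_k(x)$ for the Legendre polynomials, which are ${}_2F_1(-n,\,n+1;\,1;\,(1-x)/2)$ polynomials, i.e. the second family listed above up to an affine change of variable:

```python
import sympy as sp

# Brute-force inversion coefficients for Legendre polynomials: x^n = sum_k a_k P_k(x).
x = sp.symbols('x')
N = 4
P = [sp.legendre(k, x) for k in range(N + 1)]

for n in range(N + 1):
    alphas = [sp.Symbol(f'a{k}') for k in range(n + 1)]
    residual = sp.expand(x**n - sum(a * P[k] for k, a in enumerate(alphas)))
    eqs = sp.Poly(residual, x).all_coeffs()            # every coefficient must vanish
    sol = sp.solve(eqs, alphas, dict=True)[0]
    print(n, [sol[a] for a in alphas])
# e.g. n = 2 prints [1/3, 0, 2/3]:  x^2 = (1/3) P_0(x) + (2/3) P_2(x)
```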
In aerodynamics, wing loading is the loaded weight of the aircraft divided by the area of the wing. [1] The faster an aircraft flies, the more lift is produced by each unit of wing area, so a smaller wing can carry the same weight in level flight, operating at a higher wing loading. Correspondingly, the landing and takeoff speeds will be higher. The high wing loading also decreases maneuverability. The same constraints apply to winged biological organisms. A very low wing loading on a flexible-wing hang glider Contents Range of wing loadings 1 Effect on performance 2 Effect on takeoff and landing speeds 2.1 Effect on climb rate and cruise performance 2.2 Effect on turning performance 2.3 Effect on stability 2.4 Effect of development 2.5 Water ballast use in gliders 2.6 Design considerations 3 Fuselage lift 3.1 Variable-sweep wing 3.2 Fowler flaps 3.3 See also 4 References 5 Notes 5.1 Bibliography 5.2 External links 6 Range of wing loadings Aircraft Buzz Z3 [2] [3] Fun 160 [4] ASK 21 Nieuport 17 Ikarus C42 Cessna 152 Vans RV-4 DC-3 MV-22 [5] Spitfire Bf-109 B-17 B-36 Eurofighter Typhoon F-104 A380 B747 MD-11F Wing loading (kg/m 2) 3.9 6.3 33 38 38 49 67 123 130 158 173 190 272 311 514 663 740 844 Wing loading (lb/ft 2) 0.8 1.3 6.8 7.8 7.8 10 14 25 27 32 35 39 56 64 105 136 152 173 Role paraglider hang glider glider WWI fighter microlight trainer sports airliner tiltrotor WWII fighter WWII fighter WWII bomber trans-Atlantic jet bomber multi-role fighter jet interceptor large airliner large airliner medium-long range airliner Year introduced 2010 2007 1979 1916 1997 1978 1980 1936 2007 1938 1937 1938 1949 2003 1958 2007 1970 1990 The table, which shows wing loadings, is intended to give an idea of the range of wing loadings used by aircraft. Maximum weights have been used. There will be variations amongst variants of any particular type. The dates are approximate, indicating period of introduction. The upper critical limit for bird flight is about 5 lb/ft 2 (25 kg/m 2). [6] An analysis of bird flight which looked at 138 species ranging in mass from 10 g to 10 kg, from small passerines to swans and cranes found wing loadings from about 1 to 20 kg/m 2. [7] The wing loadings of some of the lightest aircraft fall comfortably within this range. One typical hang glider (see table) has a maximum wing loading of 6.3 kg/m 2, and an ultralight rigid glider [8] 8.3 kg/m 2. Effect on performance Wing loading is a useful measure of the general manoeuvring performance of an aircraft. Wings generate lift owing to the motion of air over the wing surface. Larger wings move more air, so an aircraft with a large wing area relative to its mass (i.e., low wing loading) will have more lift available at any given speed. Therefore, an aircraft with lower wing loading will be able to take off and land at a lower speed (or be able to take off with a greater load). It will also be able to turn at a higher speed. Effect on takeoff and landing speeds Quantitatively, the lift force L on a wing of area A, travelling at speed v is given by \textstyle\frac{L}{A}=\tfrac{1}{2}v^2\rho C_L, Where ρ is the density of air and \textstyle v^2=\frac {2gW_S} {\rho C_L} . C L is the lift coefficient. The latter is a dimensionless number of order unity which depends on the wing cross-sectional profile and the angle of attack. At take-off or in steady flight, neither climbing or diving, the lift force and the weight are equal. With L/A = Mg/A = W S g, where M is the aircraft mass, W S = M/ A the wing loading (in mass/area units, i.e. 
lb/ft² or kg/m², not force/area) and g the acceleration due to gravity, that equation gives the speed v through

$v^2=\frac{2gW_S}{\rho C_L}$.

As a consequence, aircraft with the same C_L at takeoff under the same atmospheric conditions will have takeoff speeds proportional to $\sqrt{W_S}$. So if an aircraft's wing area is increased by 10% and nothing else changed, the takeoff speed will fall by about 5%. Likewise, if an aircraft designed to take off at 150 mph grows in weight during development by 40%, its takeoff speed increases to $150\sqrt{1.4} \approx 177$ mph.

Some flyers rely on their muscle power to gain speed for takeoff over land or water. Ground nesting and water birds have to be able to run or paddle at their takeoff speed and the same is so for a hang glider pilot, though he or she may get an assist from a downhill run. For all these a low W_S is critical, whereas passerines and cliff dwelling birds can get airborne with higher wing loadings.

Effect on climb rate and cruise performance

Wing loading has an effect on an aircraft's climb rate. A lighter loaded wing will have a superior rate of climb compared to a heavier loaded wing as less airspeed is required to generate the additional lift to increase altitude. A lightly loaded wing has a more efficient cruising performance because less thrust is required to maintain lift for level flight. However, a heavily loaded wing is more suited for higher speed flight because smaller wings offer less drag. The second equation given above applies again to the cruise in level flight, though ρ and particularly C_L will be smaller than at take-off, C_L because of a lower angle of incidence and the retraction of flaps or slats; the speed needed for level flight is lower for smaller W_S.

The wing loading is important in determining how rapidly the climb is established. If the pilot increases the speed to v_c the aircraft will begin to rise with vertical acceleration a_c because the lift force is now greater than the weight. Newton's second law tells us this acceleration is given by

$Ma_c=\tfrac{1}{2}v_c^2\rho C_L A - Mg$

or

$a_c=\frac{1}{2W_S}v_c^2\rho C_L - g$,

so the initial upward acceleration is inversely proportional (reciprocal) to W_S. Once the climb is established the acceleration falls to zero as the sum of the upward components of lift plus engine thrust minus drag becomes numerically equal to the weight.

Effect on turning performance

To turn, an aircraft must roll in the direction of the turn, increasing the aircraft's bank angle. Turning flight lowers the wing's lift component against gravity and hence causes a descent. To compensate, the lift force must be increased by increasing the angle of attack by use of up elevator deflection which increases drag. Turning can be described as 'climbing around a circle' (wing lift is diverted to turning the aircraft) so the increase in wing angle of attack creates even more drag. The tighter the turn radius attempted, the more drag induced; this requires that power (thrust) be added to overcome the drag. The maximum rate of turn possible for a given aircraft design is limited by its wing size and available engine power: the maximum turn the aircraft can achieve and hold is its sustained turn performance. As the bank angle increases so does the g-force applied to the aircraft, this having the effect of increasing the wing loading and also the stalling speed. This effect is also experienced during level pitching maneuvers.
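The takeoff-speed relation above is easy to evaluate numerically. Below is a minimal Python sketch (my own illustration, not from the article; the air density and lift coefficient are assumed, representative values) that estimates a takeoff speed from a mass-based wing loading and shows the $\sqrt{W_S}$ scaling.

```python
import math

def takeoff_speed(wing_loading_kg_m2, rho=1.225, c_l=1.5, g=9.81):
    """Estimate takeoff speed (m/s) from v^2 = 2 g W_S / (rho * C_L).

    wing_loading_kg_m2: wing loading W_S in kg/m^2 (mass/area, as in the list above)
    rho: air density in kg/m^3 (sea-level value assumed)
    c_l: lift coefficient at takeoff (illustrative assumption)
    """
    return math.sqrt(2.0 * g * wing_loading_kg_m2 / (rho * c_l))

# Illustrative check: a trainer-like wing loading of 49 kg/m^2
v = takeoff_speed(49)
print(f"{v:.1f} m/s (~{v * 3.6:.0f} km/h)")
# Doubling the wing loading raises the takeoff speed by a factor of sqrt(2)
print(takeoff_speed(98) / v)
```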
[9] Aircraft with low wing loadings tend to have superior sustained turn performance because they can generate more lift for a given quantity of engine thrust. The immediate bank angle an aircraft can achieve before drag seriously bleeds off airspeed is known as its instantaneous turn performance. An aircraft with a small, highly loaded wing may have superior instantaneous turn performance, but poor sustained turn performance: it reacts quickly to control input, but its ability to sustain a tight turn is limited. A classic example is the F-104 Starfighter, which has a very small wing and high wing loading. At the opposite end of the spectrum was the gigantic Convair B-36. Its large wings resulted in a low wing loading, and there are disputed claims that this made the bomber more agile than contemporary jet fighters (the slightly later Hawker Hunter had a similar wing loading of 250 kg/m²) at high altitude. Whatever the truth in that, the delta winged Avro Vulcan bomber, with a wing loading of 260 kg/m², could certainly be rolled at low altitudes. [10]

Like any body in circular motion, an aircraft that is fast and strong enough to maintain level flight at speed v in a circle of radius R accelerates towards the centre at $\frac{v^2}{R}$. That acceleration is caused by the inward horizontal component of the lift, $L\sin\theta$, where θ is the banking angle. Then from Newton's second law,

$\frac{Mv^2}{R}=L\sin\theta=\frac{1}{2}v^2\rho C_L A\sin\theta$.

Solving for R gives

$R=\frac{2W_S}{\rho C_L\sin\theta}$.

The smaller the wing loading, the tighter the turn. Gliders designed to exploit thermals need a small turning circle in order to stay within the rising air column, and the same is true for soaring birds. Other birds, for example those that catch insects on the wing, also need high maneuverability. All need low wing loadings.

Effect on stability

Wing loading also affects gust response, the degree to which the aircraft is affected by turbulence and variations in air density. A small wing has less area on which a gust can act, and a higher wing loading means a given gust pressure produces less acceleration, both of which serve to smooth the ride. For high-speed, low-level flight (such as a fast low-level bombing run in an attack aircraft), a small, thin, highly loaded wing is preferable: aircraft with a low wing loading are often subject to a rough, punishing ride in this flight regime. The F-15E Strike Eagle has a wing loading of 650 kg/m² (excluding fuselage contributions to the effective area), whereas most delta wing aircraft (such as the Dassault Mirage III, for which W_S = 387 kg/m²) tend to have large wings and lower wing loadings.

Quantitatively, if a gust produces an upward pressure of G (in N/m², say) on an aircraft of mass M, the upward acceleration a will, by Newton's second law, be given by

$a=\frac{GA}{M}=\frac{G}{W_S}$,

decreasing with wing loading.

Effect of development

A further complication with wing loading is that it is difficult to substantially alter the wing area of an existing aircraft design (although modest improvements are possible). As aircraft are developed they are prone to "weight growth" -- the addition of equipment and features that substantially increase the operating mass of the aircraft. An aircraft whose wing loading is moderate in its original design may end up with very high wing loading as new equipment is added.
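As a quick illustration of the turn-radius formula $R = 2W_S/(\rho C_L\sin\theta)$, the short Python sketch below (my own addition, not part of the article; the lift coefficient and bank angle are illustrative assumptions) compares sustained-turn radii for a lightly and a heavily loaded wing.

```python
import math

def turn_radius(wing_loading_kg_m2, bank_deg, rho=1.225, c_l=1.0):
    """Turn radius in metres from R = 2 W_S / (rho * C_L * sin(theta)).

    W_S is a mass-based wing loading (kg/m^2), matching the article's
    convention, so no extra factor of g appears here.
    """
    return 2.0 * wing_loading_kg_m2 / (rho * c_l * math.sin(math.radians(bank_deg)))

# Illustrative comparison at a 60 degree bank:
# glider-like vs. jet-interceptor-like wing loadings from the list above
for ws in (33, 514):
    print(ws, "kg/m^2 ->", round(turn_radius(ws, 60)), "m")
```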
Although engines can be replaced or upgraded for additional thrust, the effects on turning and takeoff performance resulting from higher wing loading are not so easily reconciled.

Water ballast use in gliders

Modern gliders often use water ballast carried in the wings to increase wing loading when soaring conditions are strong. By increasing the wing loading the average speed achieved across country can be increased to take advantage of strong thermals. With a higher wing loading, a given lift-to-drag ratio is achieved at a higher airspeed than with a lower wing loading, and this allows a faster average speed across country. The ballast can be ejected overboard when conditions weaken. [11] (See Gliding competitions)

Design considerations

Fuselage lift

The F-15E Strike Eagle has a large relatively lightly loaded wing

A blended wing-fuselage design such as that found on the F-16 Fighting Falcon or MiG-29 Fulcrum helps to reduce wing loading; in such a design the fuselage generates aerodynamic lift, thus improving wing loading while maintaining high performance.

Variable-sweep wing

Aircraft like the F-14 Tomcat and the Panavia Tornado employ variable-sweep wings. As their wing area varies in flight so does the wing loading (although this is not the only benefit). When the wing is in the forward position takeoff and landing performance is greatly improved. [12]

Fowler flaps

The use of Fowler flaps increases the wing area, decreasing the wing loading, which allows slower takeoff and landing speeds.

References

Notes

^ Thom, 1988. p. 6.
^ Ozone Buzz Z3: http://www.para2000.org/wings/ozone/buzzz3.html
^ Ozone Buzz Z3: http://www.flyozone.com/paragliders/en/products/gliders/buzz-z3/info/
^ Airborne Fun 160: http://www.airborne.com.au/pages/hg_fun.html
^ Data and performances of selected aircraft and rotorcraft, A. Filippone, Progress in Aerospace Sciences 36 (2000) 629-654
^ Meunier, 1951
^ http://www.biology-online.org/articles/flight-speeds-among-bird-species.html
^ BUG4 http://home.att.net/~m--sandlin/bug.htm
^ Spick, 1986. p. 24.
^ http://uk.youtube.com/watch?v=X4r0Kk-xX4o
^ Maximizing glider cross-country speed
^ Spick, 1986. pp. 84-87.

Bibliography

Meunier, K. Korrelation und Umkonstruktionen in den Größenbeziehungen zwischen Vogelflügel und Vogelkörper. Biologia Generalis 1951: pp. 403-443. [Article in German]
Thom, Trevor. The Air Pilot's Manual 4 - The Aeroplane - Technical. 1988. Shrewsbury, Shropshire, England. Airlife Publishing Ltd. ISBN 1-85310-017-X
Spick, Mike. Jet Fighter Performance - Korea to Vietnam. 1986. Osceola, Wisconsin. Motorbooks International. ISBN 0-7110-1582-1

External links

NASA article on wing loading. Retrieved 8 February 2008.
pip install jupyter notebook
pip install numpy

# Import the packages we need
import matplotlib.pyplot as plt
import numpy as np
import sklearn
import sklearn.datasets
import sklearn.linear_model
import matplotlib

# Display plots inline and change default figure size
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)

The make_moons dataset generator

# Generate a dataset and plot it
np.random.seed(0)
X, y = sklearn.datasets.make_moons(200, noise=0.20)
plt.scatter(X[:,0], X[:,1], s=40, c=y, cmap=plt.cm.Spectral)

<matplotlib.collections.PathCollection at 0x1e88bdda780>

To demonstrate the point about learning features, let us train a logistic regression classifier. Taking the x- and y-axis values as input, it will output the predicted class (0 or 1). To keep things simple, we will use the logistic regression classifier built into scikit-learn directly.

# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X, y)

LogisticRegressionCV(Cs=10, class_weight=None, cv=None, dual=False, fit_intercept=True, intercept_scaling=1.0, max_iter=100, multi_class='ovr', n_jobs=1, penalty='l2', random_state=None, refit=True, scoring=None, solver='lbfgs', tol=0.0001, verbose=0)

# Helper function to plot a decision boundary.
# If you don't fully understand this function don't worry, it just generates the contour plot below.
def plot_decision_boundary(pred_func):
    # Set min and max values and give it some padding
    x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
    y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
    h = 0.01
    # Generate a grid of points with distance h between them
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    # Predict the function value for the whole grid
    Z = pred_func(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    # Plot the contour and training examples
    plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral)
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Spectral)

# Plot the decision boundary
plot_decision_boundary(lambda x: clf.predict(x))
plt.title("Logistic Regression")

The graph shows the decision boundary learned by our Logistic Regression classifier. It separates the data as well as it can using a straight line, but it's unable to capture the "moon shape" of our data.

Now we build a three-layer neural network consisting of one input layer, one hidden layer and one output layer. The number of nodes in the input layer is determined by the dimensionality of our data, which is 2. Correspondingly, the number of nodes in the output layer is determined by the number of classes, also 2. (Because we only predict 0 or 1 we could get away with a single output node, but in practice two output nodes make it easier to extend the network to more classes later on.) Taking the x, y coordinates as input, the output is two probabilities: one for class 0 (representing "female") and the other for class 1 (representing "male"). The result looks like this:

The neural network makes predictions via forward propagation. Forward propagation is just a bunch of matrix multiplications together with the activation function we defined earlier. If the input x to the network is 2-dimensional, then we can compute its prediction as follows:

\begin{aligned} z_1 & = xW_1 + b_1 \\ a_1 & = \tanh(z_1) \\ z_2 & = a_1W_2 + b_2 \\ a_2 & = \hat{y} = \mathrm{softmax}(z_2) \end{aligned}

$z_i$ is the input of layer $i$ and $a_i$ is the output of layer $i$ after applying the activation function. $W_1, b_1, W_2, b_2$ are parameters of our network, which we need to learn from our training data. You can think of them as matrices transforming data between layers of the network. Looking at the matrix multiplications above we can figure out the dimensionality of these matrices. If we use 500 nodes for our hidden layer then $W_1 \in \mathbb{R}^{2\times500}$, $b_1 \in \mathbb{R}^{500}$, $W_2 \in \mathbb{R}^{500\times2}$, $b_2 \in \mathbb{R}^{2}$. Now you see why we have more parameters if we increase the size of the hidden layer.

Learning the parameters for our network means finding parameters ($W_1, b_1, W_2, b_2$) that minimize the error on our training data. But how do we define the error? We call the function that measures our error the loss function. A common choice with the softmax output is the cross-entropy loss.
If we have $N$ training examples and $C$ classes then the loss for our prediction $\hat{y}$ with respect to the true labels $y$ is given by:

\begin{aligned} L(y,\hat{y}) = - \frac{1}{N} \sum_{n \in N} \sum_{i \in C} y_{n,i} \log\hat{y}_{n,i} \end{aligned}

The formula looks complicated, but all it really does is sum over our training examples and add to the loss if we predicted the incorrect class. So, the further away $y$ (the correct labels) and $\hat{y}$ (our predictions) are, the greater our loss will be.

Remember that our goal is to find the parameters that minimize our loss function. We can use gradient descent to find its minimum. I will implement the most vanilla version of gradient descent, also called batch gradient descent with a fixed learning rate. Variations such as SGD (stochastic gradient descent) or minibatch gradient descent typically perform better in practice. So if you are serious you'll want to use one of these, and ideally you would also decay the learning rate over time.

As an input, gradient descent needs the gradients (vector of derivatives) of the loss function with respect to our parameters: $\frac{\partial{L}}{\partial{W_1}}$, $\frac{\partial{L}}{\partial{b_1}}$, $\frac{\partial{L}}{\partial{W_2}}$, $\frac{\partial{L}}{\partial{b_2}}$. To calculate these gradients we use the famous backpropagation algorithm, which is a way to efficiently calculate the gradients starting from the output. I won't go into detail how backpropagation works, but there are many excellent explanations (here or here) floating around the web. Applying the backpropagation formula we find the following (trust me on this):

\begin{aligned} & \delta_3 = \hat{y} - y \\ & \delta_2 = (1 - \tanh^2z_1) \circ \delta_3W_2^T \\ & \frac{\partial{L}}{\partial{W_2}} = a_1^T \delta_3 \\ & \frac{\partial{L}}{\partial{b_2}} = \delta_3\\ & \frac{\partial{L}}{\partial{W_1}} = x^T \delta_2\\ & \frac{\partial{L}}{\partial{b_1}} = \delta_2 \\ \end{aligned}

Now we are ready for our implementation. We start by defining some useful variables and parameters for gradient descent:

num_examples = len(X)  # training set size
nn_input_dim = 2  # input layer dimensionality
nn_output_dim = 2  # output layer dimensionality

# Gradient descent parameters (I picked these by hand)
epsilon = 0.01  # learning rate for gradient descent
reg_lambda = 0.01  # regularization strength

First let's implement the loss function we defined above. We use this to evaluate how well our model is doing:

# Helper function to evaluate the total loss on the dataset
def calculate_loss(model):
    W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
    # Forward propagation to calculate our predictions
    z1 = X.dot(W1) + b1
    a1 = np.tanh(z1)
    z2 = a1.dot(W2) + b2
    exp_scores = np.exp(z2)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    # Calculating the loss
    correct_logprobs = -np.log(probs[range(num_examples), y])
    data_loss = np.sum(correct_logprobs)
    # Add regularization term to loss (optional)
    data_loss += reg_lambda/2 * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
    return 1./num_examples * data_loss

We also implement a helper function to calculate the output of the network. It does forward propagation as defined above and returns the class with the highest probability.
# Helper function to predict an output (0 or 1)
def predict(model, x):
    W1, b1, W2, b2 = model['W1'], model['b1'], model['W2'], model['b2']
    # Forward propagation
    z1 = x.dot(W1) + b1
    a1 = np.tanh(z1)
    z2 = a1.dot(W2) + b2
    exp_scores = np.exp(z2)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    return np.argmax(probs, axis=1)

Finally, here comes the function to train our Neural Network. It implements batch gradient descent using the backpropagation derivatives we found above.

# This function learns parameters for the neural network and returns the model.
# - nn_hdim: Number of nodes in the hidden layer
# - num_passes: Number of passes through the training data for gradient descent
# - print_loss: If True, print the loss every 1000 iterations
def build_model(nn_hdim, num_passes=20000, print_loss=False):
    # Initialize the parameters to random values. We need to learn these.
    np.random.seed(0)
    W1 = np.random.randn(nn_input_dim, nn_hdim) / np.sqrt(nn_input_dim)
    b1 = np.zeros((1, nn_hdim))
    W2 = np.random.randn(nn_hdim, nn_output_dim) / np.sqrt(nn_hdim)
    b2 = np.zeros((1, nn_output_dim))

    # This is what we return at the end
    model = {}

    # Gradient descent. For each batch...
    for i in range(0, num_passes):
        # Forward propagation
        z1 = X.dot(W1) + b1
        a1 = np.tanh(z1)
        z2 = a1.dot(W2) + b2
        exp_scores = np.exp(z2)
        probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)

        # Backpropagation
        delta3 = probs
        delta3[range(num_examples), y] -= 1
        dW2 = (a1.T).dot(delta3)
        db2 = np.sum(delta3, axis=0, keepdims=True)
        delta2 = delta3.dot(W2.T) * (1 - np.power(a1, 2))
        dW1 = np.dot(X.T, delta2)
        db1 = np.sum(delta2, axis=0)

        # Add regularization terms (b1 and b2 don't have regularization terms)
        dW2 += reg_lambda * W2
        dW1 += reg_lambda * W1

        # Gradient descent parameter update
        W1 += -epsilon * dW1
        b1 += -epsilon * db1
        W2 += -epsilon * dW2
        b2 += -epsilon * db2

        # Assign new parameters to the model
        model = { 'W1': W1, 'b1': b1, 'W2': W2, 'b2': b2}

        # Optionally print the loss.
        # This is expensive because it uses the whole dataset, so we don't want to do it too often.
        if print_loss and i % 1000 == 0:
            print ("Loss after iteration %i: %f" %(i, calculate_loss(model)))

    return model

Let's see what happens if we train a network with a hidden layer size of 3.

# Build a model with a 3-dimensional hidden layer
model = build_model(3, print_loss=True)

# Plot the decision boundary
plot_decision_boundary(lambda x: predict(model, x))
plt.title("Decision Boundary for hidden layer size 3")

Loss after iteration 0: 0.432387
Loss after iteration 1000: 0.068947
Loss after iteration 2000: 0.069541
Loss after iteration 3000: 0.071218
Loss after iteration 4000: 0.071253
Loss after iteration 5000: 0.071278
Loss after iteration 6000: 0.071293
Loss after iteration 7000: 0.071303
Loss after iteration 8000: 0.071308
Loss after iteration 9000: 0.071312
Loss after iteration 10000: 0.071314
Loss after iteration 11000: 0.071315
Loss after iteration 12000: 0.071315
Loss after iteration 13000: 0.071316
Loss after iteration 14000: 0.071316
Loss after iteration 15000: 0.071316
Loss after iteration 16000: 0.071316
Loss after iteration 17000: 0.071316
Loss after iteration 18000: 0.071316
Loss after iteration 19000: 0.071316

<matplotlib.text.Text at 0x1e88c060898>

Yay! This looks pretty good. Our neural network was able to find a decision boundary that successfully separates the classes.

In the example above we picked a hidden layer size of 3. Let's now get a sense of how varying the hidden layer size affects the result.
plt.figure(figsize=(16, 32))
hidden_layer_dimensions = [1, 2, 3, 4, 5, 20, 50]
for i, nn_hdim in enumerate(hidden_layer_dimensions):
    plt.subplot(5, 2, i+1)
    plt.title('Hidden Layer size %d' % nn_hdim)
    model = build_model(nn_hdim)
    plot_decision_boundary(lambda x: predict(model, x))
plt.show()

We can see that while a hidden layer of low dimensionality nicely captures the general trend of our data, higher dimensionalities are prone to overfitting. They are "memorizing" the data as opposed to fitting the general shape. If we were to evaluate our model on a separate test set (and you should!) the model with a smaller hidden layer size would likely perform better because it generalizes better. We could counteract overfitting with stronger regularization, but picking a correct size for the hidden layer is a much more "economical" solution.
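As a quick follow-up to that last point, here is a minimal sketch of such a held-out evaluation (my own addition, not part of the original tutorial). It assumes the build_model and predict functions above are defined; because they read the module-level X, y and num_examples, the sketch temporarily rebinds those names to the training split.

```python
from sklearn.model_selection import train_test_split

# Hold out 30% of the data to measure generalization.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# build_model/predict above train on the module-level X, y, num_examples,
# so rebind them to the training portion before fitting.
X, y, num_examples = X_train, y_train, len(X_train)

for nn_hdim in (1, 3, 50):
    model = build_model(nn_hdim)
    acc = (predict(model, X_test) == y_test).mean()
    print("hidden size %2d: test accuracy %.3f" % (nn_hdim, acc))
```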
Finally hooked everything up and ran the tests!

TL;DR: This linear axis system has position error of 0.44mm, with only 3.5μm error resulting from backlash. It falls within the desired error budget of 0.5mm, at least in the no-load condition.

So this is what the setup looks like - I have my linear stage on a desk, and the laser shines on a piece of paper far (4.5m) away. I took three tests:

1. Command the stepper motor to move 5000 counts forwards and backwards, and find out what distance this is
2. (Turning off the power supply between each jump) testing ability to reliably move a set distance - measure open-loop error when turning on the system
3. Testing ability to return to a position - measure cumulative backlash error

Distance from "zero" point of rail to wall "L" = 449cm
Travel distance of the linear axis "x" is defined differently every test
"Radius" of the most sensitive part of the carriage "r" = 20mm

$\alpha = \frac{\delta}{L}$ and $err = \frac{\delta x}{L+ x}$

Where for the purposes of my experiments, I assume all projected error comes from distance errors, even though some constant portion of it comes from angular error instead. (My analyses farther down ignore angular errors)

((For the results of these experiments, skip to the end))

Linear Axis v2 accomplishes two things. First, it has an actuator (nema17 stepper motor + 1/4-20 threaded rod), which allows me to send distance commands. Second, it has an anti-backlash device so switching directions accumulates less error.

Me planning out components for this actuator (ended up buying a flexible coupling instead of making one)

Some machining notes

This version2 reuses most of the parts from v1, with some modifications. I replaced one of the steel rails with a 1/4"-20 threaded rod, and added some bushings to the bearing blocks to accommodate the new diameter. I also made a plywood stand to put all the things on so I could use the linear axis without needing an empty optical table. The big new thing here is the carriage for this threaded rod. It uses the same anti-backlash system as Austin's granite mill (Seek&Geek#1), which uses two adjustable-offset nuts for its preload. The threads of one of the nuts will always* be contacting the threads of the rod when traveling in either direction. (* not actually always; since these are hand-tightened there will be some amount of user error here)

Carriage and modified bearing block

Carriage!

I chose to replace one of the rails with this actuator, which has some advantages and disadvantages compared to having two rails with the actuator in the center.

Actuator acting as Rail
Advantages:
- Much easier to build with the preexisting parts
- Don't have to worry about overconstraint from the two other rails
Disadvantages:
- Sensitive to threaded-rod imperfections, especially in roll
- Actuator will always apply a moment, which magnifies error

Two Rails, with Actuator in the center
Advantages:
- A more fundamentally-kinematically precise solution (we did a board problem on this)
- Can use parts with larger manufacturing tolerances and still achieve good performance
Disadvantages:
- Have to take care to add compliance to avoid binding from overconstraint
- Not as easy to modify v1 rail to accomplish vs. the other method

Now for experiments! I used a standard nema17 stepper and an Arduino microcontroller, so nothing fancy. (A stepper motor is a brushless DC motor that divides a full rotation into an equal number of steps, so they will precisely rotate a fixed rotor angle without needing feedback.
They usually do this by having tooth-shaped electromagnets and a gear-shaped rotor.) Did I bring my 2.70 work and my calipers on a plane to Sweden? Yes, yes I did.

Results!

Experiment 1 - Turn on power to the system, command a set distance, turn power off

I flipped the direction and did this again, for a total of three trials (2 forwards, 1 backwards). Looking at a ruler next to the moving carriage, it seems like the machine consistently moves 3.1cm per 5000-count jump. Looking at the laser deviations, we can get a better error resolution. The standard deviation for the three jumps was 1.54mm and the magnification for this experiment was 120.7. So, for this experiment the machine moves 3.1cm with an error of 12.8μm - pretty consistent commutation by the motor.

The Arduino is a generalist microcontroller, and when first powered on it briefly supplies 5V to all its logic pins. This slightly energizes the stepper motor on startup and causes additional error between what should be identical (or within 1.5mm on paper) landing points. This error was an average of 15.7μm, of which approx. 3μm should be start-up error.

Experiment 2 - Motor moves forwards, with pauses to measure distance traveled. Turn off power and restart. Then motor moves back to the starting point, again with pauses.

The carriage started at position 13.5cm. It moved four times, each time 5000-counts (nom. 3.1cm), and ended at position 2.2cm. This distance seemed a bit short, being an average 2.825cm instead of the expected distance. The system was turned off and reprogrammed to move backwards four times, again 5000-counts, and ended at position 14.7cm - giving an average jump-distance of 3.125cm.

This is exciting. From Experiment1, we determined that turning the system on gives ~3μm error, and movement will have an average error of 13μm per jump. This machine error projected on the wall should give us an expected average jump error of 1.65mm. Experiment2's average jump-distance is off from our expected 3.1cm by 0.25mm and falls within the expected amount of error. Experiment2 also allows me to measure average error of returning to a position. This ended up being 0.44mm error for my travel range of ~12cm, which is within my original desired error budget (500μm) for this machine. Woo!

While the carriage was moving, the laser dot moved back and forth on the page. This is partially due to vibrations transmitted by the motor, partially from contact vibrations between the carriage nut and the threaded rod, and partially from the laser pointer only being taped onto the carriage bed. The tip of the laser pointer displaces 0.115mm from these vibrations.
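To make the arithmetic above reproducible, here is a small Python sketch (mine, not from the original writeup) that converts a deviation measured on the far-away paper back into machine error using the magnification factor quoted for Experiment 1.

```python
def machine_error_um(wall_deviation_mm, magnification):
    """Convert a deviation seen on the paper (mm) into machine error (micrometres)."""
    return wall_deviation_mm / magnification * 1000.0

# Numbers quoted in Experiment 1: 1.54 mm standard deviation on the paper,
# magnification factor 120.7 -> about 12.8 um of machine error.
print(round(machine_error_um(1.54, 120.7), 1))
```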
Experiment 3 - Testing ability to return to a position (cumulative backlash error)

The carriage started at position 6.6cm. Moving forwards and backwards 5000-counts, it consistently landed at positions 6.6cm and 3.4cm (5 trials), so it traveled 3.2cm distances, not 3.1cm. So that's odd given the results of the other two experiments, but at least it's repeatable here.

Position error for this experiment was an average of 3.4μm, which means my machine is pretty good at rejecting backlash. While the carriage was moving, the tip of the laser wobbles 38μm (I did a better job clamping down the base platform for this experiment).

Bonus Analysis - Angle of linear axis machine relative to the wall

From Experiment3, we see that the entire linear axis system is not quite square* with the laser paper (if it were square, there would be no systematic difference between the front measurements and the rear measurements). If we assume this discrepancy is entirely due to angular** error relative to the wall, we can get an estimate of what that angle is.

$\theta = \tan^{-1}(\frac{\Delta}{x}) = 0.28^\circ$

$\Delta$ is the distance between the average front and back points (3.4cm and 6.6cm, resp.) multiplied by Experiment3's magnification factor. This angular error seems around right for lining things up using the floor tiles.

*I know that the laser pointer itself is not colinear with the axis-direction-of-travel... but I'm just combining this angular error with the main one and calling the whole thing "machine error"

**I'm also assuming my system didn't move between/during experiments, which probably isn't true.
I cannot claim to be an expert on AQFT, but the parts that I'm familiar with rely on local fields quite a bit.

First, a clarification. In your question, I think you may be conflating two ideas: local fields ($\phi(x)$, $F^{\mu\nu}(x)$, $\bar{\psi}\psi(x)$, etc) and unobservable local fields ($A_\mu(x)$, $g_{\mu\nu}(x)$, $\psi(x)$, etc).

Local fields are certainly recognizable in AQFT, even if they are not used everywhere. In the Haag-Kastler or Brunetti-Fredenhagen-Verch (aka Locally Covariant Quantum Field Theory or LCQFT) frameworks, you can think of algebras assigned to spacetime regions by a functor, $U\mapsto \mathcal{A}(U)$. These could be causal diamonds in Minkowski space (Haag-Kastler) or globally hyperbolic spacetimes (LCQFT). You can also have a functor assigning smooth compactly supported test functions to spacetime regions, $U\mapsto \mathcal{D}(U)$. A local field is then a natural transformation $\Phi\colon \mathcal{D} \to \mathcal{A}$ between these two functors. Unwrapping the definition of a natural transformation, you find for every spacetime region $U$ a map $\Phi_U\colon \mathcal{D}(U)\to \mathcal{A}(U)$, such that $\Phi_U(f)$ behaves morally as a smeared field, $\int \mathrm{d}x\, f(x) \Phi(x)$ in physics notation. This notion of smeared field is certainly in use in the algebraic constructions of free fields as well as in the perturbative renormalization of interacting LCQFTs (as developed in the last decade and a half by Hollands, Wald, Brunetti, Fredenhagen, Verch, etc), where locality is certainly taken very seriously.

Now, my understanding of unobservable local fields is unfortunately much murkier. But I believe that they are indeed absent from the algebras of observables that one would ideally work with. For instance, following the Haag-Kastler axioms, localized algebras of observables must commute when spacelike separated. That is impossible if you consider smeared fermionic fields as elements of your algebra. However, I think at least the fermionic fields can be recovered via the DHR analysis of superselection sectors. The issue with unobservable fields with local gauge symmetries is much less clear (at least to me) and may not be completely settled yet (though see some speculative comments on my part here).
Volume 65, № 3, 2013

Ukr. Mat. Zh. - 2013. - 65, № 3. - pp. 315-328

We consider a nonlocal boundary-value problem for a system of impulsive hyperbolic equations. Conditions for the existence of a unique solution of the problem are established by the method of functional parameters, and an algorithm for its determination is proposed.

Application of the ergodic theory to the investigation of a boundary-value problem with periodic operator coefficient

Ukr. Mat. Zh. - 2013. - 65, № 3. - pp. 329-338

We establish necessary and sufficient conditions for the solvability of a family of differential equations with periodic operator coefficient and periodic boundary condition by using the notion of the relative spectrum of a linear bounded operator in a Banach space and the ergodic theorem. We show that if the existence condition is satisfied, then these periodic solutions can be constructed by using the formula for the generalized inverse of a linear bounded operator obtained in the present paper.

Correct Solvability of a Nonlocal Multipoint (in Time) Problem for One Class of Evolutionary Equations

Ukr. Mat. Zh. - 2013. - 65, № 3. - pp. 339-353

We study properties of a fundamental solution of a nonlocal multipoint (with respect to time) problem for evolution equations with pseudo-Bessel operators constructed on the basis of constant symbols. The correct solvability of this problem in the class of generalized functions of distribution type is proved.

Asymptotic Representations for Some Classes of Solutions of Ordinary Differential Equations of Order $n$ with Regularly Varying Nonlinearities

Ukr. Mat. Zh. - 2013. - 65, № 3. - pp. 354-380

Existence conditions and asymptotic (as $t \uparrow \omega (\omega \leq +\infty)$) representations are obtained for one class of monotone solutions of an $n$th-order differential equation whose right-hand side contains a sum of terms with regularly varying nonlinearities.

Ukr. Mat. Zh. - 2013. - 65, № 3. - pp. 381-391

We investigate a periodic problem for the linear telegraph equation $$u_{tt} - u_{xx} + 2\mu u_t = f (x, t)$$ with Neumann boundary conditions. We prove that the operator of the problem is modeled by a Fredholm operator of index zero in the scale of Sobolev spaces of periodic functions. This result is stable under small perturbations of the equation where the coefficient $\mu$ becomes variable and discontinuous or an additional zero-order term appears. We also show that the solutions of this problem possess smoothing properties.

Ukr. Mat. Zh. - 2013. - 65, № 3. - pp. 392-404

We obtain a constructive description of all Hilbert function spaces that are interpolation spaces with respect to a couple of Sobolev spaces $[H^{(s_0)}(\mathbb{R}^n), H^{(s_1)}(\mathbb{R}^n)]$ of some integer orders $s_0$ and $s_1$ and that form an extended Sobolev scale. We find equivalent definitions of these spaces with the use of uniformly elliptic pseudodifferential operators positive definite in $L_2(\mathbb{R}^n)$. Possible applications of the introduced scale of spaces are indicated.

Ukr. Mat. Zh. - 2013. - 65, № 3. - pp. 405-417

We study functional, differential, integral, self-affine, and fractal properties of continuous functions belonging to a finite-parameter family of functions with a continuum set of "peculiarities". Almost all functions of this family are singular (their derivative is equal to zero almost everywhere in the sense of Lebesgue) or nowhere monotone, in particular, nondifferentiable.
We consider different approaches to the definition of these functions (using a system of functional equations, projectors of symbols of different representations, distribution of random variables, etc.).

Ukr. Mat. Zh. - 2013. - 65, № 3. - pp. 418-429

We establish conditions for the well-posedness of a problem for one class of parabolic equations with the Bessel operator in one of the space variables in a bounded domain with multipoint conditions in the time variable and some boundary conditions in the space coordinates. A solution of the problem is constructed in the form of a series in a system of orthogonal functions. We prove a metric theorem on lower bounds for the small denominators appearing in the solution of the problem.

Ukr. Mat. Zh. - 2013. - 65, № 3. - pp. 430-450

We study the asymptotic behavior of solutions of the higher-order neutral difference equation $$\Delta^m[x(n)+cx(\tau(n))]+p(n)x(\sigma(n))=0, \quad m \in \mathbb{N},\ m \geq 2,\ n \geq 0,$$ where $\tau(n)$ is a general retarded argument, $\sigma(n)$ is a general deviated argument, $c \in \mathbb{R}$, $(p(n))_{n \geq 0}$ is a sequence of real numbers, $\Delta$ denotes the forward difference operator $\Delta x(n) = x(n+1) - x(n)$, and $\Delta^j$ denotes the $j$th forward difference operator $\Delta^j x(n) = \Delta(\Delta^{j-1} x(n))$ for $j = 2, 3,\dots,m$. Examples illustrating the results are also given.

Ukr. Mat. Zh. - 2013. - 65, № 3. - pp. 451-454
There isn't a particularly meaningful answer to this, but I hope I can provide some insight. Mostly it boils down to the observation that injection velocity is not particularly meaningful/constant-or-optimised between rocket designs.

Injection mass flux is the interesting engineering quantity ($v \times \rho \times A$), where $v$ is velocity, $\rho$ is density and $A$ is cross sectional area. Hence $\frac{v_i}{v_e} = \frac{\rho_e A_e}{\rho_i A_i}$. However, unlike for the exhaust, where maximizing $v$ is critical, a pintle injector would work almost exactly as well if it had double the area and half the injection velocity or vice-versa. $\rho$ is also a significant source of variation.

The subtleties of the trade-offs are a bit complex. Enough so, that designs vary significantly. For example:

A gas generator cycle feeds the fuel/oxidiser into the injectors pretty much as it comes out of the tanks. As do pressure-fed, electric-pump-fed, and tap-off cycle engines.

In a staged combustion cycle some or all of the propellant will have already been through a combustion chamber, increasing its temperature and lowering its density.

In expander cycles the expansion (usually of the fuel) due to heating is directly the source of energy used to pump the propellants.

This change in density would affect injection velocity, for a given injector geometry.

Finally, unlike exhaust velocity which is fairly well defined, injection velocity is a little less clear. Take a look at: https://upload.wikimedia.org/wikipedia/commons/1/1f/Pintle_3.png

There are a number of constrictions near the outlet. Where you choose to take the injector to end, and the combustion chamber to begin, will affect the answer you get. It should also be clear you are fairly free to alter the geometry to the same effect.
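A quick numerical sketch of the mass-flux relation $\dot m = \rho v A$ (my own illustration; the propellant numbers are made up, not data from any specific engine): for the same mass flow, halving the injection area doubles the injection velocity.

```python
def injection_velocity(mass_flow_kg_s, density_kg_m3, area_m2):
    """Injection velocity from mass-flux continuity: mdot = rho * v * A."""
    return mass_flow_kg_s / (density_kg_m3 * area_m2)

mdot = 10.0   # kg/s, illustrative
rho = 800.0   # kg/m^3, a rough liquid-propellant density (assumption)
for area in (2e-3, 1e-3):  # m^2: half the area -> double the velocity
    print(area, "m^2 ->", injection_velocity(mdot, rho, area), "m/s")
```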
Definition of a Partial Derivative

Let \(f(x,y)\) be a function of two variables. Then we define the partial derivatives as:

Definition: Partial Derivative

\[ f_x = \dfrac{\partial f}{\partial x} = \lim_{h\to{0}} \dfrac{f(x+h,y)-f(x,y)}{h} \]

\[ f_y = \dfrac{\partial f}{\partial y} = \lim_{h\to{0}} \dfrac{f(x,y+h)-f(x,y)}{h} \]

if these limits exist. Algebraically, we can think of the partial derivative of a function with respect to \(x\) as the derivative of the function with \(y\) held constant. Geometrically, the derivative with respect to \(x\) at a point \(P\) represents the slope of the curve that passes through \(P\) whose projection onto the \(xy\) plane is a horizontal line (if you travel due East, how steep are you climbing?)

Example \(\PageIndex{1}\)

Let \[ f(x,y) = 2x + 3y \nonumber\] then

\[\begin{align*} \dfrac{\partial f}{ \partial x} &= \lim_{h\to{0}}\dfrac{(2(x+h)+3y) - (2x+3y)}{h} \nonumber \\[4pt] &= \lim_{h\to{0}} \dfrac{2x+2h+3y-2x-3y}{h} \nonumber \\[4pt] &= \lim_{h\to{0}} \dfrac{2h}{h} =2 . \end{align*} \]

We also use the notation \(f_x\) and \(f_y\) for the partial derivatives with respect to \(x\) and \(y\) respectively.

Exercise \(\PageIndex{1}\)

Find \(f_y\) for the function from the example above.

Finding Partial Derivatives the Easy Way

Since a partial derivative with respect to \(x\) is a derivative with the rest of the variables held constant, we can find the partial derivative by taking the regular derivative considering the rest of the variables as constants.

Example \(\PageIndex{2}\)

Let \[ f(x,y) = 3xy^2 - 2x^2y \nonumber \] then \[ f_x = 3y^2 - 4xy \nonumber\] and \[ f_y = 6xy - 2x^2. \nonumber\]

Exercises \(\PageIndex{2}\)

Find both partial derivatives for

1. \( f(x,y) = xy \sin x \)
2. \( f(x,y) = \dfrac{ x + y}{ x - y}\).

Higher Order Partials

Just as with functions of one variable, we can define second derivatives for functions of two variables. For functions of two variables, we have four types: \( f_{xx}\), \(f_{xy}\), \(f_{yx}\), and \(f_{yy}\).

Example \(\PageIndex{3}\)

Let \[f(x,y) = ye^x\nonumber\] then \[f_x = ye^x \nonumber\] and \[f_y=e^x. \nonumber\] Now taking the partials of each of these we get: \[f_{xx}=ye^x \;\;\; f_{xy}=e^x \;\;\; \text{and} \;\;\; f_{yy}=0 . \nonumber\] Notice that \[ f_{xy} = f_{yx}.\nonumber\]

Theorem

Let \(f(x,y)\) be a function with continuous second order derivatives, then \[f_{xy} = f_{yx}. \]

Functions of More Than Two Variables

Suppose that \[ f(x,y,z) = xy - 2yz \nonumber\] is a function of three variables, then we can define the partial derivatives in much the same way as we defined the partial derivatives for two variables. We have \[f_x=y \;\;\; f_y=x-2z \;\;\; \text{and} \;\;\; f_z=-2y . \]

Example \(\PageIndex{4}\): The Heat Equation

Suppose that a building has a door open during a snowy day. It can be shown that the equation \[ H_t = c^2H_{xx} \nonumber \] models this situation where \(H\) is the heat of the room at the point \(x\) feet away from the door at time \(t\). Show that \[ H = e^{-t} \cos(\frac{x}{c}) \nonumber\] satisfies this differential equation.

Solution

We have \[H_t = -e^{-t} \cos (\dfrac{x}{c}) \nonumber\] \[H_x = -\dfrac{1}{c} e^{-t} \sin(\frac{x}{c}) \nonumber\] \[H_{xx} = -\dfrac{1}{c^2} e^{-t} \cos(\dfrac{x}{c}) . \nonumber\] So that \[c^2 H_{xx}= -e^{-t} \cos (\dfrac{x}{c}) . \nonumber\] And the result follows.

Contributors

Larry Green (Lake Tahoe Community College)

Integrated by Justin Marshall.
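As a quick check of the hand computations above (my own addition, assuming SymPy is available), the snippet below differentiates the functions from Examples 2 and 3 symbolically and confirms that the mixed partials agree.

```python
import sympy as sp

x, y = sp.symbols('x y')

f = 3*x*y**2 - 2*x**2*y                      # Example 2
print(sp.diff(f, x), "|", sp.diff(f, y))     # f_x and f_y, matching the example

g = y*sp.exp(x)                              # Example 3
print(sp.diff(g, x, y) == sp.diff(g, y, x))  # mixed partials agree: True
```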
Okay, now I've rather carefully discussed one example of \\(\mathcal{V}\\)-enriched profunctors, and rather sloppily discussed another. Now it's time to build the general framework that can handle both these examples.

We can define \\(\mathcal{V}\\)-enriched categories whenever \\(\mathcal{V}\\) is a monoidal preorder: we did that way back in [Lecture 29](https://forum.azimuthproject.org/discussion/2121/lecture-29-chapter-2-enriched-categories/p1). We can also define \\(\mathcal{V}\\)-enriched functors whenever \\(\mathcal{V}\\) is a monoidal preorder: we did that in [Lecture 31](https://forum.azimuthproject.org/discussion/2169/lecture-32-chapter-2-enriched-functors/p1).

But to define \\(\mathcal{V}\\)-enriched profunctors, we need \\(\mathcal{V}\\) to be a bit better. We can see why by comparing our examples.

Our first example involved \\(\mathcal{V} = \textbf{Bool}\\). A **feasibility relation**

\[ \Phi : X \nrightarrow Y \]

between preorders is a monotone function

\[ \Phi: X^{\text{op}} \times Y\to \mathbf{Bool} . \]

We shall see that a feasibility relation is the same as a \\( \textbf{Bool}\\)-enriched profunctor.

Our second example involved \\(\mathcal{V} = \textbf{Cost}\\). I said that a \\( \textbf{Cost}\\)-enriched profunctor

\[ \Phi : X \nrightarrow Y \]

between \\(\mathbf{Cost}\\)-enriched categories is a \\( \textbf{Cost}\\)-enriched functor

\[ \Phi: X^{\text{op}} \times Y \to \mathbf{Cost} \]

obeying some conditions. But I let you struggle to guess those conditions... without enough clues to make it easy!

To fit both our examples in a general framework, we start by considering an arbitrary monoidal preorder \\(\mathcal{V}\\). \\(\mathcal{V}\\)-enriched profunctors will go between \\(\mathcal{V}\\)-enriched categories. So, let \\(\mathcal{X}\\) and \\(\mathcal{Y}\\) be \\(\mathcal{V}\\)-enriched categories. We want to make this definition:

**Tentative Definition.** A \\(\mathcal{V}\\)-enriched profunctor

\[ \Phi : \mathcal{X} \nrightarrow \mathcal{Y} \]

is a \\(\mathcal{V}\\)-enriched functor

\[ \Phi: \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} .\]

Notice that this handles our first example very well. But some questions appear in our second example - and indeed in general. For our tentative definition to make sense, we need three things:

1. We need \\(\mathcal{V}\\) to itself be a \\(\mathcal{V}\\)-enriched category.

2. We need any two \\(\mathcal{V}\\)-enriched categories to have a 'product', which is again a \\(\mathcal{V}\\)-enriched category.

3. We need any \\(\mathcal{V}\\)-enriched category to have an 'opposite', which is again a \\(\mathcal{V}\\)-enriched category.

Items 2 and 3 work fine whenever \\(\mathcal{V}\\) is a commutative monoidal poset. We'll see why in [Lecture 62](https://forum.azimuthproject.org/discussion/2292/lecture-62-chapter-4-constructing-enriched-categories/p1). Item 1 is trickier, and indeed it sounds rather scary. \\(\mathcal{V}\\) began life as a humble monoidal preorder. Now we're wanting it to be _enriched in itself!_ Isn't that circular somehow?

Yes! But not in a bad way. Category theory often eats its own tail, like the mythical [ouroboros](https://en.wikipedia.org/wiki/Ouroboros), and this is an example. To get \\(\mathcal{V}\\) to become a \\(\mathcal{V}\\)-enriched category, we'll demand that it be 'closed'. For starters, let's assume it's a monoidal _poset_, just to avoid some technicalities.
**Definition.** A monoidal poset is **closed** if for all elements \\(x,y \in \mathcal{V}\\) there is an element \\(x \multimap y \in \mathcal{V}\\) such that

\[ x \otimes a \le y \text{ if and only if } a \le x \multimap y \]

for all \\(a \in \mathcal{V}\\).

This will let us make \\(\mathcal{V}\\) into a \\(\mathcal{V}\\)-enriched category by setting \\(\mathcal{V}(x,y) = x \multimap y \\). But first let's try to understand this concept a bit!

We can check that our friend \\(\mathbf{Bool}\\) is closed. Remember, we are making it into a monoidal poset using 'and' as its binary operation: its full name is \\( (\lbrace \text{true},\text{false}\rbrace, \wedge, \text{true})\\). Then we can take \\( x \multimap y \\) to be 'implication'. More precisely, we say \\( x \multimap y = \text{true}\\) iff \\(x\\) implies \\(y\\). Even more precisely, we define:

\[ \text{true} \multimap \text{true} = \text{true} \]

\[ \text{true} \multimap \text{false} = \text{false} \]

\[ \text{false} \multimap \text{true} = \text{true} \]

\[ \text{false} \multimap \text{false} = \text{true} . \]

**Puzzle 188.** Show that with this definition of \\(\multimap\\) for \\(\mathbf{Bool}\\) we have

\[ a \wedge x \le y \text{ if and only if } a \le x \multimap y \]

for all \\(a,x,y \in \mathbf{Bool}\\).

We can also check that our friend \\(\mathbf{Cost}\\) is closed! Remember, we are making it into a monoidal poset using \\(+\\) as its binary operation: its full name is \\( ([0,\infty], \ge, +, 0)\\). Then we can define \\( x \multimap y \\) to be 'subtraction'. More precisely, we define \\(x \multimap y\\) to be \\(y - x\\) if \\(y \ge x\\), and \\(0\\) otherwise.

**Puzzle 189.** Show that with this definition of \\(\multimap\\) for \\(\mathbf{Cost}\\) we have

\[ a + x \le y \text{ if and only if } a \le x \multimap y . \]

But beware. [We have defined the ordering on \\(\mathbf{Cost}\\) to be the _opposite_ of the usual ordering of numbers in \\([0,\infty]\\)](https://forum.azimuthproject.org/discussion/2128/lecture-31-chapter-2-lawvere-metric-spaces/p1). So, \\(\le\\) above means the _opposite_ of what you might expect!

Next, two more tricky puzzles. Next time I'll show you in general how a closed monoidal poset \\(\mathcal{V}\\) becomes a \\(\mathcal{V}\\)-enriched category. But to appreciate this, it may help to try some examples first:

**Puzzle 190.** What does it mean, exactly, to make \\(\mathbf{Bool}\\) into a \\(\mathbf{Bool}\\)-enriched category? Can you see how to do this by defining

\[ \mathbf{Bool}(x,y) = x \multimap y \]

for all \\(x,y \in \mathbf{Bool}\\), where \\(\multimap\\) is defined to be 'implication' as above?

**Puzzle 191.** What does it mean, exactly, to make \\(\mathbf{Cost}\\) into a \\(\mathbf{Cost}\\)-enriched category? Can you see how to do this by defining

\[ \mathbf{Cost}(x,y) = x \multimap y \]

for all \\(x,y \in \mathbf{Cost}\\), where \\(\multimap\\) is defined to be 'subtraction' as above?

Note: for Puzzle 190 you might be tempted to say "a \\(\mathbf{Bool}\\)-enriched category is just a preorder, so I'll use that fact here". However, you may learn more if you go back to the [general definition of enriched category](https://forum.azimuthproject.org/discussion/2121/lecture-29-chapter-2-enriched-categories/p1) and use that! The reason is that we're trying to understand some general things by thinking about two examples.

**Puzzle 192.** The definition of 'closed' above is an example of a very important concept we keep seeing in this course. What is it?
Restate the definition of closed monoidal poset in a more elegant, but equivalent, way using this concept. **[To read other lectures go here.](http://www.azimuthproject.org/azimuth/show/Applied+Category+Theory#Chapter_4)**
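As a small side note on Puzzle 188 (my own addition, not part of the lecture): the stated property for \\(\mathbf{Bool}\\) can be checked by brute force, enumerating all truth values with false \\(\le\\) true and \\(\multimap\\) defined as implication above.

```python
# Truth values ordered with False <= True; "imp" is the -o operation defined above.
def leq(p, q):
    return (not p) or q          # p <= q in Bool

def imp(x, y):
    return (not x) or y          # x -o y, i.e. "x implies y"

print(all(leq(a and x, y) == leq(a, imp(x, y))
          for a in (False, True)
          for x in (False, True)
          for y in (False, True)))   # True: the adjunction holds in all 8 cases
```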
Introduction: discriminative and generative learning algorithms

As an alternative to the very well-known logistic regression model, I really enjoyed learning about generative learning algorithms. Especially finding out that the gaussian discriminant analysis model (a specific generative learning algorithm that will be the focus of this post) can be written in the form

\begin{align} \Pr(y = 1 | x; \theta) = \frac{1}{1 + e^{-\theta^Tx}} \end{align}

was astonishing to me, since it is exactly the hypothesis used in logistic regression. In fact, I remember that when I was first learning about logistic regression I kind of wondered where this hypothesis was coming from. The fact that this sigmoid function (as it is called when viewed as a function of $\theta^Tx$) was derived as a result of GLM's and the exponential family demystified it a little. But then on the other hand, the exponential family with its natural parameter and its sufficient statistic are also not overly intuitive, and deriving GLM's also feels somewhat like a trick. But I am getting ahead of myself.

What I wanted to explain here is the difference between discriminative and generative learning algorithms. In short, discriminative learning algorithms try to model $\Pr(y|x)$ directly. For logistic regression, where $y \in \{0, 1\}$, this translates to:

\begin{align} \Pr(y=1|x;\theta) &= h_{\theta}(x) \\\ \Pr(y=0|x;\theta) &= 1 - h_{\theta}(x) \\\ \end{align}

With the hypothesis defined as:

\begin{align} h_{\theta}(x) = g(\theta^Tx) = \frac{1}{1 + e^{-\theta^Tx}} \in (0,1) \end{align}

So that the value of the hypothesis evaluated at a certain $x$ and parameterized by $\theta$ denotes the probability that $y$ equals $1$. This is thus a discriminative learning algorithm: when someone presents us an input $x$, we get a probability for $y$ directly from our hypothesis.

A generative learning algorithm set out for a similar task would do something differently. Let's say we want to classify if an image contains a hotdog ($y = 1$), or a not-hotdog ($y = 0$). Then we first make a model for what a hotdog looks like, $\Pr(x | y = 1)$, and second a model for what a not-hotdog looks like, $\Pr(x | y = 0)$. When a new image then comes in (say of a pizza), we compare the probabilities of our two models and make the prediction on whichever has the higher probability: hotdog, or not-hotdog.

Derivation of the GDA model and its link to logistic regression

So let's put this into practice: how are we going to model $\Pr(x | y = 1)$ and $\Pr(x | y = 0)$? Well, in the gaussian discriminant analysis model, we are going to assume these terms are modelled by a multivariate normal distribution with shared covariance matrix $\Sigma$. It might seem a little strange that $\Sigma$ is shared, and indeed this is not strictly necessary. As it turns out, using only one covariance matrix translates to a solution with a linear decision boundary. With the use of two covariance matrices, the decision boundary becomes quadratic. For this reason the former algorithm is also called linear discriminant analysis and the latter quadratic discriminant analysis. For the purpose of comparison with logistic regression, the linear version will be used here. Anyway, we now have:

\begin{align} x | y = 1 &\sim \mathcal{N}(\mu_1, \Sigma) \\ x | y = 0 &\sim \mathcal{N}(\mu_0, \Sigma) \end{align}

Let's visualize this to get a better feeling for what we are doing. For convenience we will take $x$ to be 1-dimensional. In the figure below, the points on the y-axis represent the individual observations of the training data. Each observation is colored by its associated class: if $y = 0$, the point is colored red, and if $y = 1$, it is colored blue.
Next, two gaussians are fitted to the data, one for the red points and one for the blue points. This means we need to find values for the parameters $\mu_0$, $\mu_1$ and $\Sigma$, based on our data. This is rather standard, so I will not describe it here in detail. For the 1-dimensional case, $\mu_0$ is simply the average over all $x$ for which $y$ equals 0, and equivalently $\mu_1$ is the average over all $x$ for which $y$ equals 1. And $\Sigma$ is estimated from the pooled deviations of each observation from the mean of its own class.

As shown in the legend, these two gaussians represent our models for the input data, given the output. In other words, one is the model for what hotdogs look like, and the other for what not-hotdogs look like. Now you can probably guess what the black dashed line is supposed to be. Let's say we obtain a new sample $x$. Based on our two models we can compute the probability that this data is observed, given that it came from either class. We will thus predict the class with the higher probability. In this case, we will predict class 0, since the red curve exceeds the blue curve for this $x$. The black dashed line shows the decision boundary since it is drawn at the point where the two curves have the same value. Anything right from the line will be classified as 1, anything left from the line will be classified as 0.

So are we done yet? Well, if we only want to make predictions "0" or "1" then yes, we are actually done. However, we can do better. With the use of Bayes rule, we can also put a probability on our classification. So instead of just saying "our prediction for this $x$ equals $\hat{y}$", we can say: "our prediction for $x$ equals $\hat{y}$, with a certain probability". This is reasonable because, if you think of the previous figure, we can be much more certain about classifying an observation that lies far from the decision boundary as 0 than one that lies close to it.

So let's introduce Bayes rule, which allows us to flip the conditionals:

\begin{align} \Pr(y = 1|x) &= \frac{\Pr(x | y = 1) \cdot \Pr(y = 1)}{\Pr(x)} \\ \Pr(y = 0|x) &= \frac{\Pr(x | y = 0) \cdot \Pr(y = 0)}{\Pr(x)} \end{align}

So, for each of the terms on the right hand side, we need to find an expression. The two conditional probabilities on the right hand side we already have: these are our gaussian models. What about $\Pr(y = 1)$ and $\Pr(y = 0)$? Because we are classifying between either of two things, it makes sense to model the random variable $y$ with the Bernoulli distribution. This distribution has one parameter, which is the probability that $y = 1$. Very intuitively, this is simply estimated by the fraction of the data which have $y = 1$. In Bayesian statistics, this distribution is called the prior of $y$. It expresses the beliefs about $y$ before evidence is taken into account. The distribution on the left side is called the posterior distribution of $y$, since that represents the beliefs about $y$ after evidence (in the form of $x$) is taken into account. Then we are left with the denominator; this can simply be written in terms we already know with the following equation (by the law of total probability):

\begin{align} \Pr(x) = \Pr(x | y = 1) \cdot \Pr(y = 1) + \Pr(x | y = 0) \cdot \Pr(y = 0) \end{align}

Cool, now we have everything we need to compute $\Pr(y = 1 | x)$! Let's put everything in one figure:

We observe that the probability of our estimate for $y = 1$ increases from 0 on the left, to 1 on the right. Moreover, we see that the probability at the decision boundary equals 0.5; this is also exactly the point where the two models predict an equal probability. The fact that our posterior estimates this point with a probability 0.5 makes sense: the algorithm can't really make a prediction here, it's a 50-50% shot.
Also, the form of the green curve resembles the sigmoid function. And in fact, it can be shown that the expression for $\Pr(y = 1 | x)$ (the green line) can be rewritten exactly in this form! If both logistic regression and GDA have the same hypothesis, are they the same model? Interestingly no; although they can be written in the same form, the parameters of the model are estimated quite differently. And in fact, both techniques will also result in different parameters, and so the decision boundary will not be the same. As it turns out, the GDA model is actually more efficient (meaning it needs less data to learn) in the case where the data is (approximately) from a multivariate gaussian. Intuitively, this can be understood by noting that the GDA model uses this extra information to make a more efficient estimate. On the other hand, logistic regression is more robust and less prone to incorrect model specifications because it simply does not make any assumptions on the distribution of the data. The price it pays for this is that it is less efficient. In the next section, we will perform simulations to compare performance of both algorithms. Simulations: comparison of performance The data is going to be created according to: \begin{align} x | y = 1 &\sim \mathcal{N}\left(\frac{1}{\sqrt{2}}\begin{bmatrix} d \\ d \end{bmatrix}, \Sigma\right) \\ x | y = 0 &\sim \mathcal{N}\left(\frac{-1}{\sqrt{2}}\begin{bmatrix} d \\ d \end{bmatrix}, \Sigma\right) \end{align} The input features will thus be in $\mathbb{R}^2$ and the euclidean distance between the centers is: \begin{align} \sqrt{\Big( \frac{d}{\sqrt{2}} - \frac{-d}{\sqrt{2}}\Big)^2 + \Big( \frac{d}{\sqrt{2}} - \frac{-d}{\sqrt{2}}\Big)^2} = 2d \end{align} To start I will use a distance of 2 and a covariance matrix: \begin{align} \Sigma = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{align} So each feature has unit variance and there is no covariance (correlation) between the features. Also the distance is much larger than the variance, so there is little overlapping data. The following figure shows 200 observations from this distribution: Next, we are going to split our data in a train and test set with a 0.2:0.8 ratio. The idea here is to get a very solid estimate of the test error. The logistic regression is computed with a cross validation step (5-fold) to set the value for the regularization parameter C. The GDA model is implemented by first manually computing the parameters $\mu_0$, $\mu_1$, $\Sigma$ and $\Pr(y = 1)$, and next computing the corresponding coefficient vector and intercept according to this derivation. We then create a new logistic classifier, set these parameters, and then we are good to go to make a scoring for both algorithms based on the test set. The procedure of sampling data, estimating the model and scoring of the test data is repeated 100 times, so that we have a scoring array of 100 entries for each algorithm. In the next step we are going to perform linear regression to find out if there is a difference in scoring between both algorithms. For that purpose the response vector is created by concatenating the two scoring vectors. The design matrix consists of an intercept (const) and a dummy variable (x1). This dummy variable will be 1 for the case that the scoring comes from the GDA model. This way, the const term will capture the effect of the logistic regression model (the baseline) and x1 will capture the increase in performance by the GDA model. A sketch of this whole procedure is shown below. 
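The following is my own sketch of that procedure (the post links to its own code elsewhere); it uses scikit-learn and statsmodels, converts the GDA estimates into a linear decision rule via the standard closed-form log-odds expression, and the concrete choices (seed, d = 1.0 so that the centers are distance 2 apart, 100 points per class) are assumptions on my part.

import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def sample(n_per_class, d, cov):
    # two gaussians with means +/- (d / sqrt(2)) * [1, 1] and shared covariance
    mean = (d / np.sqrt(2)) * np.ones(2)
    x1 = rng.multivariate_normal(mean, cov, n_per_class)
    x0 = rng.multivariate_normal(-mean, cov, n_per_class)
    return np.vstack([x0, x1]), np.r_[np.zeros(n_per_class), np.ones(n_per_class)]

def gda_accuracy(Xtr, ytr, Xte, yte):
    # estimate mu_0, mu_1, shared Sigma and Pr(y = 1) by hand
    mu0, mu1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    centered = Xtr - np.where(ytr[:, None] == 1, mu1, mu0)
    Sigma = centered.T @ centered / len(ytr)
    Sinv = np.linalg.inv(Sigma)
    phi = ytr.mean()
    # linear decision rule implied by GDA (log posterior odds = w.x + b)
    w = Sinv @ (mu1 - mu0)
    b = 0.5 * (mu0 @ Sinv @ mu0 - mu1 @ Sinv @ mu1) + np.log(phi / (1 - phi))
    return ((Xte @ w + b > 0).astype(float) == yte).mean()

lr_scores, gda_scores = [], []
for _ in range(100):
    X, y = sample(100, d=1.0, cov=np.eye(2))
    Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=0.2)
    lr = LogisticRegressionCV(cv=5).fit(Xtr, ytr)
    lr_scores.append(lr.score(Xte, yte))
    gda_scores.append(gda_accuracy(Xtr, ytr, Xte, yte))

# dummy-variable regression: const = logistic baseline, x1 = GDA effect
response = np.r_[lr_scores, gda_scores]
design = sm.add_constant(np.r_[np.zeros(100), np.ones(100)])
print(sm.OLS(response, design).fit().summary())

With these (assumed) settings the printed summary should look qualitatively like the tables that follow.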
This results in the following output:

==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.9782      0.001    833.353      0.000       0.976       0.981
x1             0.0019      0.002      1.130      0.260      -0.001       0.005
==============================================================================

This is not extremely spectacular. We observe a high coefficient (0.978) for the logistic model and a very small (0.0019) but positive effect for the GDA model. The logistic model thus has an averaged accuracy of 97.8% and the GDA model does only 0.19 percentage point (pp) better. Moreover, the GDA term is not statistically significant, so we can't reject the hypothesis that both models perform equally well. This is not surprising since with these parameters the data is more or less linearly separable, and it is expected that both models will do well on such data. Increasing the variance of both features to 2 only leads to a worse baseline:

==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.9197      0.002    426.415      0.000       0.915       0.924
x1             0.0037      0.003      1.209      0.228      -0.002       0.010
==============================================================================

So instead, let's increase the variance of just one feature to 4 and reset the other back to 1:

==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.9211      0.002    370.639      0.000       0.916       0.926
x1             0.0197      0.004      5.602      0.000       0.013       0.027
==============================================================================

And we are in business! The baseline still performs well at 92%, but the GDA model has an average increase of about 2pp, and its estimate is significant. Increasing the difference between the variances even more lets the GDA model outperform the baseline slightly more, but the difference is not very large. 
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.9137      0.003    342.560      0.000       0.908       0.919
x1             0.0211      0.004      5.583      0.000       0.014       0.029
==============================================================================

Adding some covariance also doesn't change that much:

==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.9035      0.003    339.146      0.000       0.898       0.909
x1             0.0193      0.004      5.126      0.000       0.012       0.027
==============================================================================

Increasing the number of observations to 500 favours the baseline:

==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.9300      0.001    668.716      0.000       0.927       0.933
x1             0.0075      0.002      3.801      0.000       0.004       0.011
==============================================================================

Decreasing the number of observations to 50 favours the GDA model:

==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          0.8835      0.005    162.250      0.000       0.873       0.894
x1             0.0658      0.008      8.538      0.000       0.051       0.081
==============================================================================

To come to a conclusion: in the most favourable situation for GDA (where the data is created from a multivariate gaussian distribution), whenever the data is not linearly separable and the sample is rather small, GDA does outperform logistic regression by about 2pp. What remains to be done is some analysis on violations of the modeling assumptions, and to add some noise to the data to see how GDA copes with that. But that's for another post. The code for the simulations can be found here. [1] That is, once we have determined the parameters of our model, which is done by maximizing the likelihood of the parameters - but that is not the topic at the moment.
In QED, according to the Schwinger-Dyson equation,$$\left(\eta^{\mu\nu}(\partial ^2)-(1-\frac{1}{\xi})\partial^{\mu}\partial^{\nu}\right)\langle 0|\mathcal{T}A_{\nu}(x)...|0\rangle = e\,\langle 0|\mathcal{T}j^{\mu}(x)...|0\rangle + \text{contact terms}$$And the term $\left(\eta^{\mu\nu}(\partial ^2)-(1-\frac{1}{\xi})\partial^{\mu}\partial^{\nu}\right)$ is just the inverse bare photon propagator, so if we put the photon on shell, then the l.h.s. will yield the complete n-point Green function with the complete photon propagator removed and also multiplied by a factor $Z_3$, the vector field renormalization constant. But the r.h.s. gives$$\partial_{\mu}\, \langle 0|\mathcal{T}j^{\mu}(x)...|0\rangle = \text{contact terms}$$which is the usual complete (n-1)-point Green function. So if we truncate all the n-1 external complete propagators, then we are left with the proper vertex Ward identity. The problem is that the constant $Z_3$ now appears. But the well known Ward identity, e.g.$$p_\mu\Gamma^\mu_P(k,l)=H(p^2)[iS^{-1}(k)-iS^{-1}(l)]$$doesn't contain $Z_3$. Where did it go wrong? Please help.This post imported from StackExchange Physics at 2014-09-30 06:47 (UTC), posted by SE-user LYg
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$? The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog... The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues... Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca... I am a bit confused in classical physics's angular momentum. For a orbital motion of a point mass: if we pick a new coordinate (that doesn't move w.r.t. the old coordinate), angular momentum should be still conserved, right? (I calculated a quite absurd result - it is no longer conserved (an additional term that varies with time ) in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$ $=(\vec{R}+\vec{r}) \times \vec{p}$ $=\vec{R} \times \vec{p} + \vec L$ where the 1st term varies with time. (where R is the shift of coordinate, since R is constant, and p sort of rotating.) would anyone kind enough to shed some light on this for me? From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia @BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-) One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it. I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet ?Is it possible to make a time machine ever? Please give an easy answer,a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. 
— Countto1047 secs ago @vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series Although if you like epic fantasy, Malazan book of the Fallen is fantastic @Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/… @vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson @vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots @Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$. Now take the neigborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$ Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$ Why is it that you can only cool down a substance so far before the energy goes into changing it's state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more? Thanks @CooperCape but this leads me another question I forgot ages ago If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
Commercial Property Closing Costs Federal Title & Escrow Co. | Smart Solutions, Simple Settlements – CLOSE IT! What’s the cash to close? Know the cost of homebuying from start to finish. Our free app lays out all the costs of buying or selling a home in the District, Maryland, and Virginia.Fundamental Period Calculator How to find period of this periodic function? – Mathematics. – A different function with zero spacing pattern ABAB might have period $\pi$. If extended, the ABBA pattern is ABBAABBAABBA.., and there are no periods of less than four letters length in this pattern. The repeated pattern abababab.. has a period of two letters, hence zero spacing of that form might have period $\pi$ rather than $2\pi$. Given a series {eq}\sum_{n = 1}^{\infty} a_n {/eq} its partial sums {eq}S_k {/eq} are given by: {eq}S_k=\sum_{n = 1}^{k} a_n \text{ for }k\ge 1. {/eq} The sequence of partial sums. 10 Year Business Loans Mortgage On A 400K House Should I Pay Off My Home Mortgage Early Or Invest? – Ok, so we’ve simplified the idea of paying extra towards your mortgage, but there is actually a lively debate as to whether it is a good idea or not.Best Small Business Loans of 2019. OnDeck:. The maximum amount of a 504 loan is $5.5 million, and these loans are available with 10- or 20-year maturity terms. Disaster loans. These low-interest loans can be used to repair or replace real estate, machinery and equipment, and inventory and. Use term insurance premium calculator to calculate your term plan premium at Max Life Insurance. Pay Premium till Age 60, Cover 40 Critical Illnesses and Create your Free Quote Instantly Free payment calculator to find monthly payment amount or time period to pay off a loan using a fixed term or a fixed payment. It also displays the corresponding amortization schedule and related curves. Also explore hundreds of calculators addressing other topics such as loan, finance, math, fitness, health, and many more. This loan calculator will help you determine the monthly payments on a loan. Simply enter the loan amount, term and interest rate in the fields below and click calculate to calculate your monthly. Notice: This calculator was developed based on assumptions that may or may not apply in a particular case. It calculates only the basic statutory term and does not make any determination about the validity of a patent. You should seek legal consel before undertaking actions that may be covered by an issued U.S. Patent. The main purpose of this calculator is to find expression for the n th term of a given sequence. Also, it can identify if the sequence is arithmetic or geometric. The calculator will generate all the work with detailed explanation. This loan calculator – also known as an amortization schedule calculator – lets you estimate your monthly loan repayments. It also determines out how much of your repayments will go towards the principal and how much will go towards interest. Simply input your loan amount, interest rate, loan term and repayment start date then click "Calculate". Want to calculate your college course grades? Our easy to use college GPA calculator will help you calculate your GPA and stay on top of your study grades in just minutes! Whether you are taking degree courses online or are on a community college campus, no matter what degree course or specialized study you are aiming for – we’ve got you. Retail Mortgage Lending Traditional Lenders. 
Retail lending is a widely established business across the financial sector and garners a significant amount of profit for lending institutions. Popular retail lending products include personal loans, line of credit accounts, credit cards, home equity lines of credit and mortgages.
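The loan calculators described above all take the same inputs (amount, rate, term) and, for a fixed-payment loan, typically implement the standard amortization (annuity) formula. Here is a minimal sketch of that formula; it is my own illustration, and the example figures are not taken from any of the sites mentioned.

def monthly_payment(principal, annual_rate, years):
    """Standard amortized loan payment: principal, nominal annual rate (e.g. 0.05), term in years."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# e.g. a 400k mortgage at 5% over 30 years comes out to roughly 2147 per month
print(round(monthly_payment(400_000, 0.05, 30), 2))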
Suppose you play the following game: There's a certain buy-in, and at every turn you flip a coin. If anytime you flip a tail, you lose the game and leave with your winnings. If you flip a head on the first flip, you win $\$1$. If you flip heads on the second flip, you get $\$2$, on the third flip $\$4$, and so on. Now, if a casino were to host this games, how much should they make their buy-in? Intuition says not much, but mathematically they should make it as high as they want. Why? Because the payout is infinite. The probability of flipping heads on the first flip is $\frac 12$, which gives a $\$0.50$ average payout. The probability that you flip heads on the second flip (which also means heads on the first flip) is $\frac 12\times\frac 12=\frac 14$, which also pays out $\$0.50$ on average. Continuing like this gives you a payout of $\sum^{\infty}\$0.50=\$\infty$ every time you play the game! Not such a bad thing, but it leads to my main question (shortly after). Suppose you hold a party with $30$ people in it, and you want to find the probability that any two of them will have a birthday on the same day. Do you expect that to happen, or not? Again, common everyday intuition says it seems unlikely that any two people out of thirty will have a birthday on the same day, but, again, mathematically, it is more likely than not. The exact probability is $1-\frac{365!}{365^n(365-n)!}\approx 0.7063$. So is it time to ask the question? Why do some mathematical ideas seem counter-intuitive? Mathematics isn't based off of physical observations; it's an abstract concept, so shouldn't it explain our world better, not worse? The above game (which I was told is St. Petersburg paradox) is only an example of what I mean when I say "counter-intuitive". Among others, ones I can name off the top of my head are the Monty Hall problem, Benford's Law, and the Banach-Tarski paradox. Those all have specific aspects to which a normal non-mathematician would turn their heads in confusion. I really hope my question isn't too philosophical for this site. This question has been in my head for as long as I can remember, so I decided to post some of my thoughts. Mathematical laws don't just hold for our world or our universe. It holds for all universes. For example, maybe the Banach-Tarski paradox makes perfect sense in $34$ dimensions. Or maybe the second dimension finds the concept of $\pi$ being irrational hard to grasp, whilst we find it easy. The most important thing to note is that mathematics is always right. It doesn't matter what we think. We're stupid. But in the long run, math has and always will get out on top. Is the reasoning in the previous paragraph correct? The answers so far are good, but they don't really address counter-intuitivity in general, instead specific problems. Several answers below state something along the lines of "some ideas seem counter-intuitive because we've adapted to it; that is to say, it is best for the human race". Can any of you think of a practical application of counter-intuitive ideas in the evolution of humankind? I certainly can't. So what do you think? I know my question doesn't have a solid answer, and I know it might be put on hold because of it (please don't though!). I just want to put my question out there, and hope it gets answered. Thanks for reading!
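Both puzzles above are easy to sanity-check numerically. Below is a small sketch (my own; the 40-flip cap for the coin game is an arbitrary choice) that reproduces the 0.7063 birthday figure for n = 30 and shows that a finite-horizon St. Petersburg expectation grows linearly, at $0.50 per allowed flip, exactly as computed above.

from math import prod

# probability that at least two of n people share a birthday (365 equally likely days)
def birthday_collision(n):
    return 1 - prod((365 - k) / 365 for k in range(n))

print(round(birthday_collision(30), 4))   # 0.7063

# expected St. Petersburg payout if the game is capped at m flips:
# every allowed flip contributes $0.50 to the expectation
def capped_expectation(m):
    return 0.5 * m

print(capped_expectation(40))   # 20.0 dollars, even with a generous 40-flip cap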
https://doi.org/10.1351/goldbook.A00086 The quantity of light available to molecules at a particular point in the atmosphere and which, on absorption, drives photochemical processes in the atmosphere. It is calculated by integrating the spectral radiance \(L\left (\lambda,\,\theta,\,\varphi \right )\) over all directions of incidence of the light, \(E(\lambda) = \int _{\theta}\, \int _{\phi} L\left (\lambda,\theta,\varphi \right )\: \text{sin}\,\theta\: \text{d}\theta\: \text{d}\varphi\). If the radiance is expressed in \(\text{J m}^{-2}\ \text{s}^{-1}\ \text{sr}^{-1}\ \text{nm}^{-1}\) and \(hc/\lambda\) is the energy per quantum of light of wavelength \(\lambda\), the actinic flux has units of \(\text{quanta cm}^{-2}\ \text{s}^{-1}\ \text{nm}^{-1}\). This important quantity is one of the terms required in the calculation of j-values, the first order rate coefficients for photochemical processes in the sunlight-absorbing trace gases in the atmosphere. The actinic flux is determined by the solar radiation entering the atmosphere and by any changes in this due to atmospheric gases and particles (e.g. absorption by stratospheric ozone, scattering and absorption by aerosols and clouds), and reflections from the ground. It is therefore dependent on the wavelength of the light, on the altitude and on specific local environmental conditions. The actinic flux has borne many names (e.g. flux, flux density, beam irradiance, actinic irradiance, integrated intensity) which has caused some confusion. It is important to distinguish the actinic flux from the spectral irradiance, which refers to energy arrival on a flat surface having fixed spatial orientation (\(\text{J m}^{-2}\ \text{nm}^{-1}\)) given by: \[E(\lambda) = \int _{\theta}\, \int _{\phi} L\left (\lambda,\theta,\varphi \right )\, \text{cos}\,\theta \: \text{sin}\,\theta\: \text{d}\theta\: \text{d}\varphi\] The actinic flux does not refer to any specific orientation because molecules are oriented randomly in the atmosphere. This distinction is of practical relevance: the actinic flux (and therefore a j-value) near a brightly reflecting surface (e.g. over snow or above a thick cloud) can be a factor of three higher than that near a non-reflecting surface. The more descriptive name of spherical irradiance is suggested for the quantity herein called actinic flux. See also: flux density, photon
In brief, the autoregressive (AR) terms represent the relationship between $y_t$ and $y_{t-1}$. A simple AR(1) model is: $$y_t=\phi_1 y_{t-1} + \epsilon_{t}$$ In words, if $y_{t-1}$ is large, subsequent $y$'s also tend to be large if $\phi_1>0$ (although, if $\phi_1$ is less than 1, then $y$ will tend to gradually collapse back down). In an AR(p) process, this is extended to $p$ lagged $y$ terms. Moving average (MA) terms arise from a model like this:$$y_t = \theta_1 \epsilon_{t-1} + \epsilon_{t}$$ More generally, an MA(q) process is a moving average of the last $q$ error terms, with weights equal to $\theta_1, \ldots, \theta_q$. A combination of AR and MA models is called an ARMA model. Finally, having differences in the model (the middle term of the ARIMA model specification in R) means that instead of an ARMA model in $y$, the ARMA model describes $y_t-y_{t-1}$. You also referred to sma1 and sar1 terms ... you can extend the ARIMA model even further to also cover seasonal time series, in which case sma1 and sar1 refer to the coefficients of the lagged errors and $y_t$'s at seasonal periods (i.e. 12 observations ago for monthly data with an annual seasonal pattern). Rob Hyndman's excellent online textbook Forecasting: Principles and Practice contains a chapter on ARIMA models that explains the meaning of the terms in far more detail than above. Other (offline) standard references include Applied Time Series Modelling and Forecasting (Harris) and Time Series Analysis (Hamilton).
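To see the two behaviours side by side, here is a tiny simulation sketch (my own, with arbitrary parameter values $\phi_1 = 0.8$ and $\theta_1 = 0.6$):

import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(phi, n=200):
    """y_t = phi * y_{t-1} + eps_t"""
    eps = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + eps[t]
    return y

def simulate_ma1(theta, n=200):
    """y_t = theta * eps_{t-1} + eps_t"""
    eps = rng.standard_normal(n)
    y = eps.copy()
    y[1:] += theta * eps[:-1]
    return y

ar = simulate_ar1(0.8)
ma = simulate_ma1(0.6)
print(np.corrcoef(ar[:-1], ar[1:])[0, 1])   # lag-1 autocorrelation, roughly phi
print(np.corrcoef(ma[:-1], ma[1:])[0, 1])   # roughly theta / (1 + theta^2) for an MA(1)

The AR(1) series shows persistent wandering that keeps collapsing back towards its mean, while the MA(1) series only "remembers" the single previous shock.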
Before even getting to the calculation part, it's important to point out that you've mixed up stray capacitance (what you would use for snubber calculations in this case) with interwinding capacitance (which is totally irrelevant for determining your snubber values). Interwinding capacitance does play a role in what kind of harmonics are conducted back into mains, but that's a totally different concern and not a factor in calculating snubber component values. The only way for your bridge rectifier to see this capacitance is if it was connected the same way as you measured it - across the primary and secondary. But they're not - they're connected across the secondary. They're connected to one terminal of this parasitic capacitor, but the other terminal is floating as far as anything purely referenced to the secondary side is concerned. What is important is the stray capacitance, which is the parasitic capacitance formed across the two secondary leads. Physically, it is the capacitance formed between the windings of one coil and the other windings of that same coil. The capacitance is there, even if it has a fairly low value resistor shorting it (the secondary winding resistance). With that out of the way, snubbing only has one thing with an 'optimum value', and I think it is beneficial to get a conceptual understanding of what is even going on, as it makes all of this stuff a lot easier to understand, and actually reduces (!) the math you need to worry about. A Moment to Reflect Before one can understand a snubber, one must understand the snubbee. The thing being snubbed. This is, of course, ringing. Ringing is caused by reflections - in transmission lines, in hitting something with a hammer, in any flow of energy. When there is a sudden discontinuity in the characteristic impedance of a current path, some of that energy gets reflected back towards the source. And a diode, transistor, relay, or other switching element represents more or less the worst possible case of discontinuous impedance - it goes from being the characteristic impedance of that leg of the circuit to effectively infinite (save for a trickle of leakage current), often in a matter of nanoseconds. This is bad. That is going to cause a significant reflection. Reflections contain meaningful amounts of energy, and this energy isn't going to just disappear; it is going to slosh around in whatever will store it until dissipated. And what will store it? The parasitic capacitances and inductances of our circuit of course! Together, they form an LC tank, oscillating at the resonant frequency as determined by the amount of parasitic inductance and capacitance making up the tank. This is the source of the ringing, and what determines the frequency it rings at. Reflections in the context of transmission lines and characteristic impedances can get confusing because this is all very abstract. All you need to understand it, however, is to understand that the word 'reflection' is not being used metaphorically. These are actual reflections! The kind you are quite familiar with already: the reflection from a piece of glass, the echo off a rock wall, or the heat reflected off the parabola of a heat lamp. Vibration in a chime struck against a wall. This is all we are talking about, and it is common to any movement of energy. Don't let the more abstract quality of this otherwise familiar occurrence throw you off - you already understand reflections in transmission lines, but might not realize yet that you do. 
Stop, Hammer Time Understand that what I am about to say is not just an analogy, but a mechanical equivalence of the same effect. Imagine that the current flowing through the diode (in the reverse direction - it is still the recovery period and the diode hasn't had time to 'turn off' or block the reverse flow yet) is a hammer that you're swinging through the air. There is a small resistance to your hammer swing in the form of air resistance. This is the characteristic impedance. It's the impedance you expect to feel at every point along the swing. However, once the diode slams shut, this is a sharp discontinuity in impedance, one that results in a huge increase of impedance. This is your hammer hitting a hard surface. It brings your swing to a halt, but this doesn't remove all the energy from this situation. Some of the energy of your hammer blow is reflected back into the hammer, causing it to bounce and vibrate (ring) in your hand. It dissipates quickly though, usually in the form of heat - the hammer head will begin to heat up blow after blow. This is because some of the energy of each swing is being reflected back into the hammer, and this occurs because of a change in mechanical impedance - from moving through air to suddenly encountering a hard barrier, or even just splashing into water. That's all that is going on, even if it all happens invisibly in the circuit. With that in mind, the snubber is simply a way to dissipate some of that reflected energy as heat - just like with the hammer. The hammer is already well-snubbed by the steel it is made out of, but our circuit is not a hammer, it is more like a chime. It rings for a long time and loudly after being struck, so our snubber is like placing your hand on it to end the vibration quickly. Math Time OK, we're actually getting to the answer part! Armed with this conceptual understanding, let's talk RC snubbers. The part about an RC snubber that we actually need to calculate and pick an optimal value for is the 'R' of the snubber. You might have guessed what we are trying to do here already: provide a resistive path, in parallel with the switch, that matches the characteristic impedance of the circuit. This is simply equal to the impedance due to the parasitic capacitance and inductance (the same thing that also causes the LC tank and the ringing). Which is, of course: $$R=\sqrt{\frac{L}{C}}$$ I would note that all 3 articles you linked give the same formula, this formula. This is the important part. If we don't match the impedance of the rest of the (reactive) circuit, then we will still have the same problem with reflection and our snubber won't do much good, or can even make things worse. However, if we just put that resistor in parallel with our switching element... we won't be switching much of anything anymore. There is an entire alternative path now and a diode is made irrelevant in this way. So we add a capacitor in series with the resistor to block DC current from flowing, allowing our switch to actually do something useful. Now, instead of energy getting reflected back towards the source (and into the parasitic tank formed by the capacitance across the diode and the inductance of the transformer secondary and any other parasitics at play), it can continue on smoothly through the same impedance it had been seeing, in the form of our snubbing resistor, R, and into our snubber capacitor. 
The capacitor at a minimum needs to be equal to the parasitic capacitance so it can actually absorb this energy without causing a reflection. The capacitor itself doesn't snub anything; it is merely there to give the energy somewhere to go that requires going through the resistor, R. The only component that is actually snubbing - or dissipating - this energy is the resistor, R. The imaginary component of complex impedance - reactance - is impedance caused by the storing of energy, vs. the real component, which is caused by the dissipation of it. We want to dissipate, not store, this energy, and our snubber gives the energy a dissipative path it can go through, reflection free (mostly), when our diode or whatever slams shut like a brick wall. However, the resistor won't dissipate all of it immediately. Some still gets stored in the capacitor, and it still gets released back and the ringing will still be there, but the peak amplitude as well as how long it takes to subside will be much less, thanks to now being forced to move through our dissipative element, R, instead of just sloshing around in an LC tank with only the poorly-matched impedance of our secondary winding's resistance to ineffectively dissipate it. Increasing the resistor value will not allow all of that energy to flow into our snubber capacitor (some of it still gets reflected back), and decreasing it will not dissipate as much of it as we could, so this really is the one value here that has an optimum that we need to pick carefully. The rest doesn't really matter very much. Ok, it does, but not in the way you probably think. Remember, the capacitor stores energy; it is doing nothing to help dissipate this leftover energy from the diode turning off. There is no optimum value for the capacitor, beyond that it needs to be greater than the parasitic capacitance across the switch we're snubbing, as this ensures that there is room for all of the reflected energy on the other side of the resistor, so it will all flow through the resistor, maximizing the dissipation we get. It does have a more subtle effect however. Let's look at the resonant frequency of this ringing: $$f=\frac{1}{2\pi \sqrt{LC}}$$ There is no 'R' in it. Our snubber resistor doesn't change the frequency, but when the diode/switch/whatever is off, our snubber capacitor is in series with the inductance and capacitance we're snubbing. It's now part of the LC tank, and that means the ringing frequency is going to change. Again looking at the resonant frequency equation, we can see that quadrupling the capacitance will reduce the ringing frequency by half. In other words, if we pick a C that is 3 times that of our calculated parasitic capacitances, it will cut the ringing frequency by a factor of 2. This means that it will take twice as long for the reflected energy to flow through our dissipating snubber resistor, and that much more energy (and more power, being energy over time) will be dissipated in the resistor. No Answer, No Cry Why not just make the capacitor huge? It's a trade off. An ugly one. A larger capacitor will increase the dissipation demands on the resistor and lower efficiency, or at the extreme, waste excessive amounts of power, needlessly load the transformer, and begin to approach the original problem of just using a resistor by itself, removing the solution provided by using a capacitor in series in the first place. The RC snubber has an RC time constant like any other RC series circuit. 
This needs to be small relative to the on time of our switching element - otherwise it's little different than simply having a resistor shorting out our switch. For a bridge rectifier, assuming 50Hz mains, the on time would be half the period of 50Hz, or 10ms. The snubber is going to do its thing whether the switch is on or off, and the closer the time constant gets to our on time, the more power we'll waste filling that capacitor and dissipating energy that wouldn't ordinarily be part of the reflection. This is a rule of thumb, but you ideally want the time constant of your snubber to be less than 1/10th the on time of your switch, so 1ms in this case. But don't just jump straight to this value either - you'll be placing a lot of extra load on the transformer, dissipating a lot of heat, and for very little real benefit. And 1/10th is somewhat arbitrary, and represents a reasonable maximum before the tradeoff has reached silly extremes. The trade off is up to you; the articles you linked give you everything you need, especially the first one, which goes through the practical considerations of the trade off, such as power dissipated in the resistor and switches. It also gives a very good rule of thumb, which is identical to the rule of thumb in the Maxim article, which is simply to not worry that much and pick a value equal to 3 or 4 times the parasitic/intrinsic capacitance of the LC tank. Rules of thumb exist because they give a good balance in a trade-off situation and generally work well for any situation where you don't really need to care that much about the trade off. Unless this is going to be a 1000W amplifier, you probably don't need to care. But minimally, you need the capacitor to be equal to or greater than the parasitic capacitance, as this is the minimum needed to store all the energy that will be stored in the inductance. And there are even caveats with the resistance: while there is an optimum value for snubbing out that reflected energy as fast as possible, it is not the optimum value if you care more about, say, the peak voltage that results. If you need to prevent this from overshooting the rating of something voltage-sensitive like a MOSFET (which might have a breakdown voltage near the voltage it's switching, unlike bridge rectifiers that often have a breakdown voltage an order of magnitude or more greater than the voltage they're blocking), then you will often select an R that is very much suboptimal, often less than half the value. Your ringing will be worse but lower in amplitude. I disagree that the articles give different ways of calculating anything - they all give the same formulas as near as I can see. It sounds like you're hoping for a simple equation for R and C that will just tell you the best snubber to make, but that isn't going to happen because the problem can't be reduced to that. I've told you how to find the best R, with 'best' strictly in the sense of snubbing as much of that reflected energy as possible and in no other way. There is a useful range of capacitances, but the exact value is a trade off with other considerations that are up to you to figure out (or not, and merely use the rule of thumb - which is what I would suggest). And if you are less concerned about noise and more concerned with the peak amplitude, then the 'best' resistor is actually a very poor choice. 
Which is why I spent so long answering a question you didn't ask - what is actually going on here, because that's what you need to understand to navigate this problem space. And it doesn't just apply to snubbers, it can come in handy over and over again in a variety of different design situations.
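As a worked illustration of the formulas quoted in this answer ($R=\sqrt{L/C}$, $f=\frac{1}{2\pi\sqrt{LC}}$, the 3-4x capacitor rule of thumb, and the time-constant sanity check), here is a short sketch; the leakage inductance and stray capacitance figures are made-up placeholders, not measurements of the asker's transformer.

import math

# assumed (made-up) parasitics for a small mains transformer secondary
L_leak = 25e-6     # leakage inductance seen by the snubber, henries
C_par  = 200e-12   # stray capacitance across the secondary leads, farads

R_snub = math.sqrt(L_leak / C_par)                        # R = sqrt(L/C), matches the tank impedance
C_snub = 3 * C_par                                        # 3-4x rule of thumb from the linked articles
f_ring = 1 / (2 * math.pi * math.sqrt(L_leak * C_par))    # unsnubbed ringing frequency
tau = R_snub * C_snub                                     # RC time constant of the snubber

on_time = 0.010    # half a 50 Hz period, in seconds
print(f"R ~ {R_snub:.0f} ohm, C ~ {C_snub * 1e12:.0f} pF, ringing ~ {f_ring / 1e6:.2f} MHz")
print("time constant well under on_time / 10:", tau < on_time / 10)

Plugging in your own measured secondary leakage inductance and lead-to-lead stray capacitance is the only part that actually requires care.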
LaTeX is a typesetting markup language that is used to create formatted documents. You can use BibTeX to automatically generate & format a bibliography in a LaTeX document. First you need to create a bibliography database file with the extension .bib containing bibliographic entries. You can then use the following commands in your LaTeX document: This is an example of a .bib file called BibFile.bib that has just one bibliographic entry for a book: @Book{gG07, author = "Gratzer, George A.", title = "More Math Into LaTeX", publisher = "Birkhauser", address = "Boston", year = 2007, edition = "4th" } This is an example of a .tex file that refers to the .bib file: \documentclass[12pt]{article} \usepackage{amsmath} \title{An Example Document} \author{John Smith} \date{} \begin{document} \maketitle \section{The first section} This is an example of a document formatted using \LaTeX{}. This is an example of a citation \cite{gG07}. Now here is an example of an equation: \begin{align} i\hbar\frac{\partial}{\partial t}\Psi(r,t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(r,t)+V(r)\Psi(r,t) \end{align} \bibliographystyle{amsplain} \bibliography{BibFile} \end{document} And a screenshot of a section of the resulting typeset output:
In weak interaction phenomenology, especially in strangeness changing processes, effective four-quark operators are used. Such as $Q_1 = (\bar{s}_\alpha \gamma_\mu (1-\gamma_5) d_\alpha) (\bar{u}_\beta \gamma^\mu (1-\gamma_5) u_\beta)$ kind of operators, for example in this, Eq.22, page no. 21. ($\alpha,\beta = 1,2,3 $ are color indices). I needed help in calculating the matrix elements of these operators, let's say, for the process $ \bar{s} \to \gamma^* d \to e^+ e^-$ through a quark-loop, here I have drawn a $u$-quark loop but it can be any quark $q$. The problems that I am facing are: It involves both spinor and color indices. It's very different than calculations involving single kind of leptons where trace-technology is much simpler, but here we have different kinds of quarks. Can anyone please provide answers or link to books or notes where similar calculations are done. For example, explicit cross-section calculations using Fermi's four-fermion operators, even this will be really helpful. Thank you This post imported from StackExchange Physics at 2015-07-03 21:53 (UTC), posted by SE-user quanta
I'll encourage you a little by showing how I might approach your problem and to show you the use of LaTex on this site, as well. Since you are aware of the supernode concept (a term I don't like, but live with), let me approach your problem from that perspective and see if you follow it fine. I'm going to "ground" the bottom supernode (call it \$0\:\textrm{V}\$) and label the upper-left supernode as simply \$V\$. Then it follows that: $$\begin{align*}\frac{V+V_1}{R_2}+\frac{V-V_2}{R_3+R_6}+\frac{V-V_3}{R_4+R_5}&=2\\\\\frac{V}{R_2}+\frac{V}{R_3+R_6}+\frac{V}{R_4+R_5}&=2-\frac{V_1}{R_2}+\frac{V_2}{R_3+R_6}+\frac{V_3}{R_4+R_5}\\\\V\cdot\left[\frac{1}{R_2}+\frac{1}{R_3+R_6}+\frac{1}{R_4+R_5}\right]&=2-\frac{V_1}{R_2}+\frac{V_2}{R_3+R_6}+\frac{V_3}{R_4+R_5}\end{align*}$$ Solving for \$V\$ is easy, now. Just divide the right side by the left side's factor. But once inverted onto the right side, this is the same as putting those resistor groups in parallel, so the resulting equation is: $$V=\left[2-\frac{V_1}{R_2}+\frac{V_2}{R_3+R_6}+\frac{V_3}{R_4+R_5}\right]\cdot\bigg[R_2\vert\vert\left(R_3+R_6\right)\vert\vert\left(R_4+R_5\right)\bigg]$$ That's it. This is the same as dismantling the upper-left node and bottom node (disconnecting all "feeders" into them) and then placing an ammeter between the now-isolated ends to measure individual loop currents, then summing these various measurements into the left side of the above equation. And then returning to the original circuit, but now replacing the voltage sources with their impedance (zero) and the current sources with their impedance (infinite), and then analyzing the resistance between the two nodes in order to form the right side of the above equation. (\$R_1\$, of course, disappears because of the infinite impedance of the current source there.) Now. I've shown you my work. Show me yours. It's more algebra involved to approach the problem with all the nodes and elements you suggested in your question. But the important results will be the same. In any case, I see nothing from you regarding the development of your KVL and KCL equations. Sure, you might worry about over-specification (there may be reason for worry there.) But I don't see any attempts to develop any of those equations. Let's see the attempt, at least. You need to expose your way of thinking about things.
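As a quick numeric check of the closed-form expression for \$V\$ derived above, here is a short sketch; the component values below are hypothetical placeholders (only the 2 A figure comes from the node equation itself), so they are not the values from the original schematic.

# numeric check of V = [2 - V1/R2 + V2/(R3+R6) + V3/(R4+R5)] * [R2 || (R3+R6) || (R4+R5)]
# all component values below are made-up placeholders, not from the original schematic
V1, V2, V3 = 5.0, 3.0, 4.0                                # volts
R2, R3, R4, R5, R6 = 100.0, 220.0, 330.0, 150.0, 180.0    # ohms
I_src = 2.0                                               # the 2 A on the right-hand side of the node equation

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

num = I_src - V1 / R2 + V2 / (R3 + R6) + V3 / (R4 + R5)
V = num * parallel(R2, R3 + R6, R4 + R5)
print(V)

# sanity check: the currents leaving the supernode should sum to the injected 2 A
check = (V + V1) / R2 + (V - V2) / (R3 + R6) + (V - V3) / (R4 + R5)
print(abs(check - I_src) < 1e-9)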
I believe theory developed in two stages here. The work of Frigyes Riesz and others in the early 1900's considered concrete examples, and they spoke about linear functionals without feeling any need to gather them into a structured set (dual space). An analogue is perhaps Weierstrass, who discussed the convergence of sequences of functions in the 1870's without using the notion of a function space with a norm or a topology. The Riesz representation theorem is a good example of this. Riesz (1907) first defines what he means by a continuous linear operation on the space $L^2([a,b])$; this is, in slightly modernized notation, an operation which for any $f\in L^2$ gives a number $U(f)$ such that $U$ is a linear map and such that whenever $f_n\to f$ in $L^2$ we have $U(f_n)\to U(f)$. Then he shows that for each continuous linear operation $U$ there exists a function $k$ such that $U(f)=\int_a^b f(x)k(x)dx$ for all $f\in L^2([a,b])$. Note by the way that the theory was developed in function spaces before finite-dimensional vector spaces. There were many examples of functionals on the form $f\mapsto \int f(x)g(x)dx$ well known at the time (cf. potential theory, or Cauchy's integral theorem), so representation theorems would look very nice. It took another 20 years before abstract Hilbert spaces were defined, and when Riesz speaks of this theorem again in 1935, he can use an entirely modern notation: "For every continuous linear function $\ell(f)$ there is a unique representing element $g$ such that $\ell(f)=(f,g)$", where $(\cdot,\cdot)$ is the inner product on the Hilbert space. The theory for Banach spaces progressed in a similar manner. First linear functionals on $C([a,b])$ and $L^p([a,b])$ were studied and representation theorems were found (ca. 1910). Then in the 1920's a more abstract theory was developed, and in Banach's monograph from 1932 the subject is fully mature with "spaces of type (B)" [Banach spaces] and "conjugate spaces" [dual spaces]. I guess it was necessary to have several similar-looking but different examples before it seemed worth while to construct a general theory. By the way, no author is cited more often in Banach's monograph than Frigyes Riesz!
I guessed $f(a)=a^2$ and $f(a)=0$, but have no idea how to get to the solutions in a good way. Edit: I did what was suggested: from $a=b=0$ $f(0)=0$ The function is even, because from $b=-a$ $f(2a^2)=f(-2a^2)$. Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields. It only takes a minute to sign up.Sign up to join this community Define $g : \Bbb{R} \to \Bbb{R}$ by $g(x) = f(\sqrt{x})$ and $g(-x) = -g(x)$ for $x \geq 0$. ($g$ is well-defined since $f(0) = 0$.) We claim that Claim.$g$ solves the Cauchy functional equation $$ g(x+y) = g(x) + g(y), \quad x, y \in \Bbb{R} \tag{1}. $$ Proof. Let $x, y \in \Bbb{R}$. If $x, y \geq 0$. then we can pick $a\geq b\geq 0$ such that $a^2 - b^2 = \sqrt{x}$ and $2ab = \sqrt{y}$. (This becomes transparent if we write $(a, b)$ in polar coordinates.) Then we have \begin{align*} g(x+y) &= f(\sqrt{x+y}) = f(a^2 + b^2) \\ &= f(a^2 - b^2) + f(2ab) = g(x) + g(y). \end{align*} If $x, y \leq 0$, then we have $|x| = -x$ and $|y| = -y$ and thus $$g(x+y) = -g(|x|+|y|) = -g(|x|) - g(|y|) = g(x) + g(y)$$ by the definition and $\text{(1)}$. If $x \leq 0$ and $0 \leq |x| \leq y$, then from $g(y) = g(y-|x|) + g(|x|)$, we have $$ g(x) = -g(|x|) = -(g(y) - g(y-|x|)) = g(y+x) - g(y). $$ Rearrange this to get $g(x+y) = g(x) + g(y)$. If $x \leq 0$ and $0 \leq y \leq |x|$, then from $g(|x|) = g(|x|-y) + g(y)$ we have \begin{align*} g(x) &= -g(|x|) = -(g(|x|-y) + g(y)) \\ &= -g(-x-y) - g(y) = g(x+y) - g(y). \end{align*} Rearrange this to get $g(x+y) = g(x) + g(y)$. Interchanging the role of $x$ and $y$, the identity $\text{(1)}$ also holds when $y \leq 0 \leq x$. These cover all the possible sign combinations of $(x, y)$. Therefore $g$ solves $\text{(1)}$. //// Conversely, for any $g$ solving the Cauchy functional equation, $f(x) = g(x^2)$ solves the problem. So we obtain a 1-1 correspondence between the solution of $$ f(a^2 + b^2) = f(a^2 - b^2) + f(2ab), \quad a, b \in \Bbb{R} \tag{2} $$ and the solution of the Cauchy functional equation $\text{(1)}$. Now assuming the Axiom of Choice, the equation $\text{(1)}$ has solutions which is not of the form $g(x) = cx$, which means that $\text{(2)}$ also has solutions which is not of the form $f(a) = ca^2$. If we assume f is twice differentiable, here is another solution: We know $f(0) = 0$, let us differentiate equation w.r.p to a, $$2a f'(a^2 + b^2) = 2af'(a^2 -b ^2) + 2b f'(2ab)$$ Let a = b, we have $$f'(0) = 0$$ Let us differentiate the above equation to b, $$4abf''(a^2 + b^2) = -4abf''(a^2-b^2) + 2f'(2ab) + 4abf''(2ab)$$ Let a = b, we have $$-4a^2f''(0) = 2f'(2a^2)$$ let $x = 2a^2$, so we have $$f'(x) = -2xf''(0)$$ use $f(0) = 0$ and integrate the equation, we have $f(x) = -x^2 f''(0)$. so f is either 0 or f is $kx^2$
Prove that $f \in O(g) \Leftrightarrow g \in \Omega(f)$ I'm curious how that could be shown using limits or another way than the one I'm going to use? Because I don't know another way than this (not even sure if it's alright) : For the proof we assume that we have defined $f \in O(g) \Leftrightarrow \exists c \exists n_0 \forall n \geq n_0: f(n) \leq c \cdot g(n)$ and $f \in \Omega(g) \Leftrightarrow \exists c \exists n_0 \forall n \geq n_0: f(n) \geq c \cdot g(n)$ Now we are supposed to show that $f \in O(g) \Leftrightarrow g \in \Omega(f)$, so we have $$f \in O(g) \Leftrightarrow \exists c \exists n_0 \forall n \geq n_0: f(n) \leq c \cdot g(n) \Leftrightarrow \text{ for }c' = \frac{1}{c} \text{ and for }n_0 \text{ we have that } \forall n \geq n_0: g(n) \geq c' \cdot f(n) \Leftrightarrow g \in \Omega(f)$$ Is there a more soft proof for this? I'm thinking about using limits: Assume we have defined $$\lim_{n\rightarrow \infty} \frac{f(n)}{g(n)}< \infty \Rightarrow f \in O(g)$$ $$\lim_{n\rightarrow \infty} \frac{f(n)}{g(n)}>0 \Rightarrow f \in \Omega(g)$$ But then looking at the definitions, I have no idea where I should start and I wonder if it's possible at all? :o
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here, do you think that limsup could be an useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis. Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tommorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer. 
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a... @MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really. When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.? @tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there is already too many questions which I would like get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
$L$ is CFL because there is a CFG generating it: $$\begin{align}S&\rightarrow aAc\\A&\rightarrow aAc \mid BC\\B&\rightarrow aDb\\C&\rightarrow bEc\\D&\rightarrow aDb \mid \epsilon\\E&\rightarrow bEc \mid \epsilon\end{align}$$ $L$ is not DCFL. To prove this, we first give some lemmas. Lemma 1. $$ \begin{align} L &=\{a^x b^y c^z\mid x,y\ge 2, x+y+z\text{ is even},|x-y|+2\le z\le x+y-2\}\\ &=\{a^x b^y c^{|x-y|+2+z}\mid x,y\ge 2, z\text{ is even},0\le z\le x+y-|x-y|-4\}. \end{align} $$ Proof. Let $L'=\{a^x b^y c^z\mid x,y\ge 2, x+y+z\text{ is even},|x-y|+2\le z\le x+y-2\}$. It is easy to see that $L\subseteq L'$. Now let $a^xb^yc^z\in L'$; then we can choose $m=(x+y-z)/2, n=(x-y+z)/2, k=(y-x+z)/2$. Because $x+y+z$ is even, $m,n,k$ are all integers. Because $|x-y|+2\le z\le x+y-2$, we have $m,n,k\ge 1$, so $a^xb^yc^z\in L$. As a result, $L=L'$. Lemma 2. $M =\{a^x b^y c^{|x-y|+2}d^z\mid x,y\ge 2, z\text{ is even},z\le x+y-|x-y|-4\}$ is not context-free. Proof. Suppose it is context-free. According to Ogden's lemma, there exists some $p\ge 1$ such that $s=a^{p+2}b^{p+2}c^2d^{2p}\in M$ can be written as $s=uvwrt$, such that the number of $d$s in $vwr$ is at most $p$ (i.e. we mark all $d$s), there is at least one $d$ in $vr$, and for all $q\ge 0$, $uv^qwr^qt\in M$. From condition 3, $v$ and $r$ each contain occurrences of at most one distinct symbol (otherwise pumping would produce a string outside $a^*b^*c^*d^*$). Together with condition 2, as $q$ grows, the number of $d$s in $uv^qwr^qt$ grows without bound. To satisfy the condition $z\le x+y-|x-y|-4=2\min\{x,y\}-4$, the numbers of $a$s and $b$s must both grow, i.e. both $a$ and $b$ must occur in $vr$. However, $v$ and $r$ together contain at most two distinct symbols, and $d$ is already one of them, so $vr$ cannot contain both $a$ and $b$, a contradiction. Now let's come back to $L$. Suppose there is a DPDA $D$ accepting $L$. We create two copies $D_1$ and $D_2$ of $D$ and change the input character $c$ in $D_2$ to $d$. We then construct a new PDA $P$ as follows: The states of $P$ are the union of the states of $D_1$ and $D_2$. The start state of $P$ is the start state of $D_1$. The accepting states of $P$ are the accepting states of $D_2$. For a transition in $D_1$, if the destination is an accepting state in $D_1$, change the destination to the corresponding state in $D_2$. All other transitions remain unchanged. Now run $P$ on a string $a^xb^yc^{|x-y|+2}d^z\in M$. According to Lemma 1, after reading $a^xb^yc^{|x-y|+2}$, it enters a state in $D_2$ for the first time. Since $a^xb^yc^{|x-y|+2+z}\in L$ according to Lemma 1, $P$ will finally accept. On the other hand, if a string $s$ is accepted by $P$, let $s=s_1s_2$ where, after reading $s_1$, $P$ enters a state in $D_2$ for the first time. According to Lemma 1, $s_1$ must have the form $a^xb^yc^{|x-y|+2}$ ($x,y\ge 2$). Also, $s_2$ does not contain $c$, and after changing all $d$s in $s_2$ to $c$s, $s_1s_2$ belongs to $L$. This means $s$ must have the form $a^xb^yc^{|x-y|+2}d^z$ where $x+y+|x-y|+2+z$ is even (i.e. $z$ is even) and $|x-y|+2+z\le x+y-2$ (i.e. $z\le x+y-|x-y|-4$). Hence $s\in M$. As a result, $P$ recognizes $M$, which contradicts Lemma 2.
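A quick machine check can complement the hand proof of Lemma 1. The sketch below is my own addition, not part of the original argument: it brute-forces the two descriptions of $L$ on a small box of exponents, taking $L = \{a^{m+n} b^{m+k} c^{n+k} : m, n, k \ge 1\}$, as the proof uses implicitly; the bound `B` is an arbitrary choice.

```python
from itertools import product

B = 12  # check all exponent triples (x, y, z) with x, y, z <= B (arbitrary small bound)

# Triples generated as (m+n, m+k, n+k) with m, n, k >= 1, restricted to the box.
L_set = {(m + n, m + k, n + k)
         for m, n, k in product(range(1, B + 1), repeat=3)
         if m + n <= B and m + k <= B and n + k <= B}

# Triples satisfying the arithmetic description L' of Lemma 1, in the same box.
Lp_set = {(x, y, z)
          for x, y, z in product(range(1, B + 1), repeat=3)
          if x >= 2 and y >= 2 and (x + y + z) % 2 == 0
          and abs(x - y) + 2 <= z <= x + y - 2}

assert L_set == Lp_set          # the two descriptions agree on the whole box
print(len(L_set), "triples checked")
```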
Covariance between two random variables defines a measure of how closely they are linearly related to each other. But what if the joint distribution is circular? Surely there is structure in the distribution. How is this structure extracted? By "circular" I understand that the distribution is concentrated on a circular region, as in this contour plot of a pdf. If such a structure exists, even partially, a natural way to identify and measure it is to average the distribution circularly around its center. (Intuitively, this means that for each possible radius $r$ we should spread the probability of being at distance $r$ from the center equally around in all directions.) Denoting the variables as $(X,Y)$, the center must be located at the point of first moments $(\mu_X, \mu_Y)$. To do the averaging it is convenient to define the radial distribution function $$F(\rho) = \Pr[(X-\mu_X)^2 + (Y-\mu_Y)^2 \le \rho^2], \quad \rho \ge 0;$$ $$F(\rho) = 0, \quad \rho \lt 0.$$ This captures the total probability of lying between distance $0$ and $\rho$ of the center. To spread it out in all directions, let $R$ be a random variable with cdf $F$ and $\Theta$ be a uniform random variable on $[0, 2\pi]$ independent of $R$. The bivariate random variable $(\Xi, H) = (R\cos(\Theta) + \mu_X, R\sin(\Theta)+\mu_Y)$ is the circular average of $(X,Y)$. (This does the job our intuition demands of a "circular average" because (a) it has the correct radial distribution, namely $F$, by construction, and (b) all directions from the center ($\Theta$) are equally probable.) At this point you have many choices: all that remains is to compare the distribution of $(X,Y)$ to that of $(\Xi, H)$. Possibilities include an $L^p$ distance and the Kullback-Leibler divergence (along with myriad related distance measures: symmetrized divergence, Hellinger distance, mutual information, etc.). The comparison suggests $(X,Y)$ may have a circular structure when it is "close" to $(\Xi, H)$. In this case the structure can be "extracted" from properties of $F$. For instance, a measure of central location of $F$, such as its mean or median, identifies the "radius" of the distribution of $(X,Y)$, and the standard deviation (or other measure of scale) of $F$ expresses how "spread out" $(X,Y)$ are in the radial directions about their central location $(\mu_X, \mu_Y)$. When sampling from a distribution, with data $(x_i,y_i), 1 \le i \le n$, a reasonable test of circularity is to estimate the central location as usual (with means or medians) and thence convert each value $(x_i,y_i)$ into polar coordinates $(r_i, \theta_i)$ relative to that estimated center. Compare the standard deviation (or IQR) of the radii to their mean (or median). For non-circular distributions the ratio will be large; for circular distributions it should be relatively small. (If you have a specific model in mind for the underlying distribution, you can work out the sampling distribution of the radial statistic and construct a significance test with it.) Separately, test the angular coordinate for uniformity in the interval $[0, 2\pi)$. It will be approximately uniform for circular distributions (and for some other distributions, too); non-uniformity indicates a departure from circularity. Mutual information has properties somewhat analogous to covariance. Covariance is a number which is 0 for independent variables and nonzero for variables which are linearly dependent.
In particular, if two variables are the same, then the covariance is equal to the variance (which is usually a positive number). One issue with covariance is that it may be zero even if two variables are not independent, provided the dependence is nonlinear. Mutual information (MI) is a non-negative number. It is zero if and only if the two variables are statistically independent. This property is more general than that of covariance and covers any dependencies, including nonlinear ones. If the two variables are the same, MI is equal to the variable's entropy (again, usually a positive number). If the variables are different and not deterministically related, then MI is smaller than the entropy. In this sense, the MI of two variables goes between 0 and H (the entropy), with 0 only if they are independent and H only if they are deterministically dependent. One difference from covariance is that the "sign" of the dependency is ignored. E.g. $Cov(X, -X) = -Cov(X, X) = -Var(X)$, but $MI(X, -X) = MI(X, X) = H(X)$. Please have a look at the following article from Science - it addresses your point exactly: From the abstract: Identifying interesting relationships between pairs of variables in large data sets is increasingly important. Here, we present a measure of dependence for two-variable relationships: the maximal information coefficient (MIC). MIC captures a wide range of associations both functional and not, and for functional relationships provides a score that roughly equals the coefficient of determination (R^2) of the data relative to the regression function. MIC belongs to a larger class of maximal information-based nonparametric exploration (MINE) statistics for identifying and classifying relationships. We apply MIC and MINE to data sets in global health, gene expression, major-league baseball, and the human gut microbiota and identify known and novel relationships. You find supplemental material here: http://www.sciencemag.org/content/suppl/2011/12/14/334.6062.1518.DC1 The authors even provide a free tool incorporating the novel method which can be used with R and Python: http://www.exploredata.net/
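Tying the two answers above together, here is a small illustrative sketch (my own construction; the ring parameters, bin counts, and the plug-in MI estimator are arbitrary choices, not anything prescribed by the answers). It draws points concentrated on a ring, shows that the covariance is essentially zero, applies the radial/angular circularity check described earlier, and estimates the mutual information with a simple 2-D histogram.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Points concentrated on a ring: radius ~ N(3, 0.2), angle uniform (assumed example).
r = rng.normal(3.0, 0.2, n)
theta = rng.uniform(0.0, 2.0 * np.pi, n)
x, y = r * np.cos(theta), r * np.sin(theta)

# Covariance misses the structure entirely.
print("covariance:", np.cov(x, y)[0, 1])                  # approximately 0

# Circularity check: polar coordinates about the estimated center.
xc, yc = x - x.mean(), y - y.mean()
radii = np.hypot(xc, yc)
angles = np.arctan2(yc, xc)
print("sd(radius) / mean(radius):", radii.std() / radii.mean())   # small for a ring

# Rough angular-uniformity check via a chi-square statistic on 20 bins.
counts, _ = np.histogram(angles, bins=20)
chi2 = ((counts - counts.mean()) ** 2 / counts.mean()).sum()
print("chi-square vs uniform angles (19 dof):", chi2)

# Plug-in mutual information estimate (in nats) from a 2-D histogram.
def mutual_information(a, b, bins=40):
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

print("estimated MI:", mutual_information(x, y))   # clearly positive despite ~0 covariance
```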
Underpinnings of Mass Action: The Ideal Gas

The Ideal Gas: the basis for "mass action" and a window into free-energy/work relations

The simplest possible multi-particle system, the ideal gas, is a surprisingly valuable tool for gaining insight into biological systems - from mass-action models to gradient-driven transporters. The word "ideal" really means non-interacting, so in an ideal gas all molecules behave as if no others are present. The gas molecules only feel a force from the walls of their container, which merely redirects their momenta like billiard balls. Not surprisingly, it is possible to do exact calculations fairly simply under such extreme assumptions. What's amazing is how relevant those calculations turn out to be, particularly for understanding the basic mechanisms of biological machines and chemical-reaction systems. Although ideal particles do not react or bind, their statistical/thermodynamic behavior in the various states (e.g., bound or not, reacted or not) can be used to build powerful models - e.g., for transporters.

Mass-action kinetics are ideal-gas kinetics

The key assumption behind mass-action models is that events (binding, reactions, ...) occur precisely in proportion to the concentration(s) of the participating molecules. This certainly cannot be true for all concentrations, because all molecules interact with one another at close enough distances - i.e., at high enough concentrations. In reality, beyond a certain concentration, simple crowding due to steric/excluded-volume effects means that each molecule can have only a maximum number of neighbors. But in the ideal gas - and in mass-action kinetics - no such crowding effects occur. All molecules are treated as point particles. They do not interact with one another, although virtual/effective interactions occur in a mass-action picture. (We can say these interactions are "virtual" because the only effect is to change the number of particles - no true forces or interactions occur.)

Pressure and work in an ideal gas

Ideal gases can perform work directly using pressure. The molecules of an ideal gas exert a pressure on the walls of the container holding them due to collisions, as sketched above. The amount of this pressure depends on the number of molecules colliding with each unit area of the wall per second, as well as the speed of these collisions. These quantities can be calculated based on the mass $m$ of each molecule, the total number of molecules, $N$, the total volume of the container $V$ and the temperature, $T$, leading to the ideal-gas relation

$$P = \frac{N k_B T}{V}. \tag{1}$$

In turn, $T$ determines the average speed via the relation $(3/2) \, N \, k_B T = \langle (1/2) \, m \, v^2 \rangle$. See the book by Zuckerman for more details. We can calculate the work done by an ideal gas to change the size of its container by pushing one wall a distance $d$ as shown above. We use the basic rule of physics that work is force ($f$) multiplied by distance and the definition of pressure as force per unit area. If we denote the area of the wall by $A$, we have

$$W = f \, d = (P A)\, d = P \, \Delta V. \tag{2}$$

If $d$ is small enough so that the pressure is nearly constant, we can calculate $P$ using (1) at either the beginning or end of the expansion. More generally, for a volume change of arbitrary size (from $V_i$ to $V_f$) in an ideal gas, we need to integrate:

$$W = \int_{V_i}^{V_f} P \, dV = N k_B T \ln\frac{V_f}{V_i}, \tag{3}$$

which assumes the expansion is performed slowly enough so that (1) applies throughout the process.

Free energy and work in an ideal gas

The free energy of the ideal gas can be calculated exactly in the limit of large $N$ (see below).
We will see that it does, in fact, correlate precisely with the expression for work just derived. The free energy depends on temperature, volume, and the number of molecules; for large $N$, it is given by

$$F(N, V, T) = N k_B T \left[ \ln\!\left( \frac{\lambda^3 N}{V} \right) - 1 \right],$$

where $\lambda$ is a constant for fixed temperature. For reference, it is given by $\lambda = h / \sqrt{2 \pi m k_B T}$ with $h$ being Planck's constant and $m$ the mass of an atom. See the book by Zuckerman for full details. Does the free energy tell us anything about work? If we examine the free energy change occurring during the same expansion as above, from $V_i$ to $V_f$ at constant $T$, we get

$$\Delta F = F(N, V_f, T) - F(N, V_i, T) = -N k_B T \ln\frac{V_f}{V_i}.$$

Comparing to (3), this is exactly the negative of the work done! In other words, the free energy of the ideal gas decreases by exactly the amount of work done (when the expansion is performed slowly). More generally, the work can be no greater than the free energy decrease. The ideal gas has allowed us to demonstrate this principle concretely.

The ideal gas free energy from statistical mechanics

The free energy is derived from the "partition function" $Z$, which is simply a sum/integral over Boltzmann factors for all possible configurations/states of a system. Summing over all possibilities is why the free energy encompasses the full thermodynamic behavior of a system:

$$F = -k_B T \ln Z, \qquad Z = \frac{1}{N!\,\lambda^{3N}} \int_V \!\cdots\! \int_V d\mathbf{r}^N \, e^{-U(\mathbf{r}^N)/k_B T},$$

where $\lambda(T) \propto 1/\sqrt{T}$ is the thermal de Broglie wavelength (which is not important for the phenomena of interest here), $\mathbf{r}^N$ is the set of $(x,y,z)$ coordinates for all molecules and $U$ is the potential energy function. The factor $1/N!$ accounts for interchangeability of identical molecules, and the integral is over all volume allowed to each molecule. For more information, see the book by Zuckerman, or any statistical mechanics book. The partition function can be evaluated exactly for the case of the ideal gas because the non-interaction assumption can be formulated as $U(\mathbf{r}^N) = 0$ for all configurations - in other words, the locations of the molecules do not change the energy or lead to forces. This makes the Boltzmann factor exactly $1$ for all $\mathbf{r}^N$, and so each molecule's integration over the full volume yields a factor of $V$, making the final result

$$Z_{\rm ideal} = \frac{V^N}{N!\,\lambda^{3N}}. \tag{8}$$

Although (8) assumes there are no degrees of freedom internal to the molecule - which might be more reasonable in some cases (ions) than others (flexible molecules) - the expression is sufficient for most of the biophysical explorations undertaken here.

References

Any basic physics textbook. D.M. Zuckerman, "Statistical Physics of Biomolecules: An Introduction," (CRC Press, 2010), Chapters 5, 7.
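As a quick numerical sanity check of the work/free-energy relation discussed above (my own addition; the particle number, temperature, and volumes are arbitrary choices), one can integrate $P(V) = N k_B T / V$ over a slow isothermal expansion and compare the result with $N k_B T \ln(V_f/V_i) = -\Delta F$.

```python
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K
N = 1.0e20                 # number of molecules (arbitrary)
T = 300.0                  # temperature, K (arbitrary)
Vi, Vf = 1.0e-3, 2.0e-3    # initial and final volumes, m^3 (arbitrary)

# Numerical work integral for the slow isothermal expansion, W = integral of P dV.
V = np.linspace(Vi, Vf, 200001)
P = N * kB * T / V
W_numeric = float(np.sum(0.5 * (P[1:] + P[:-1]) * np.diff(V)))   # trapezoid rule

# Closed forms used in the text: W = N kB T ln(Vf/Vi), and dF = -W for the ideal gas.
W_closed = N * kB * T * np.log(Vf / Vi)
dF = -W_closed

print(W_numeric, W_closed, -dF)   # the three numbers agree to numerical precision
```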
I've read in several places that one motivation for category theory was to be able to give precise meaning to statements like, "finite dimensional vector spaces are canonically isomorphic to their double duals; they are isomorphic to their duals as well, but not canonically." I've finally sat down to work through this, and - Okay, yes, it is easy to see that the "canonical isomorphism" from $V$ to $V^{**}$ is a functor that has a natural isomorphism (in the sense of category theory) to the identity functor. Also, I see that there is no way that the functor $V\mapsto V^*$ could have a natural isomorphism to the identity functor, because it is contravariant whereas the identity functor is covariant. My question amounts to: Is contravariance the whole problem? To elaborate: I was initially disappointed by the realization that the definition of natural isomorphism doesn't apply to a pair of functors one of which is covariant and the other contravariant, because I was hoping that the lack of a canonical isomorphism $V\rightarrow V^*$ would feel more like a theorem as opposed to an artifact of the inapplicability of a definition. Then I tried to create a definition of a natural transformation from a covariant functor $F:\mathscr{A}\rightarrow\mathscr{B}$ to a contravariant functor $G:\mathscr{A}\rightarrow\mathscr{B}$. It seems to me that this definition should be that all objects $A\in\mathscr{A}$ get a morphism $m_A:F(A)\rightarrow G(A)$ such that for all morphisms $f:A\rightarrow A'$ of $\mathscr{A}$, the following diagram (in $\mathscr{B}$) commutes: $$\require{AMScd}\begin{CD} F(A) @>m_A>> G(A)\\ @VF(f)VV @AAG(f)A\\ F(A') @>>m_{A'}> G(A') \end{CD}$$ This is much more stringent a demand on the $m_A$ than the typical definition of a natural transformation. Indeed, it is asking that $m_A=G(f)\circ m_{A'}\circ F(f)$, regardless of how $f$ or $A'$ may vary. Taking $\mathscr{A}=\mathscr{B}=\text{f.d.Vec}_k$, $F$ the identity functor and $G$ the dualizing functor, it is clear that this definition can never be satisfied unless $m_V$ is the zero map for all $V\in\text{f.d.Vec}_k$ (because take $f$ to be the zero map). In particular, it cannot be satisfied if $m_V$ is required to be an isomorphism. Is this the right way to understand (categorically) why there is no natural isomorphism $V\rightarrow V^*$? As an aside, are there any interesting cases of some kind of analog (the above definition or another) of natural transformations from covariant to contravariant functors? Note: I have read a number of math.SE answers regarding why $V^*$ is not naturally isomorphic to $V$. None that I have found are addressed to what I'm asking here, which is about how categories make the question and answer precise. (This one was closest.) Hence my question here.
It’s a commonplace to compare Gödel’s theorem to the liar paradox: The sentence “This sentence is not true” is neither true nor false. Switch out “provable” for “true” and you get “This sentence is not provable”, and, modulo some technical stuff, this sentence is then neither provable nor refutable. But of course the “modulo some technical stuff” part is crucial: in particular, the Gödel sentence for a theory does not refer to itself. (It does say something about itself, but in a roundabout way.) It can’t refer to itself, because in the theory, you can only refer to sentences via their Gödel numbers, and a sentence can’t contain the numeral for its own Gödel number, very much like a sentence can’t contain itself in quotation marks. But this talk of “the Gödel sentence says of itself that it’s not provable” suggests something like it. It’s the source of much confusion, and maybe we should avoid the comparison when introducing the incompleteness theorem. Quine’s Paradox is maybe a bit harder to understand, but it is exactly parallel to the proof of the incompleteness theorem, and in fact the diagonal lemma more generally. Here it is, from The Ways of Paradox: If, however, in our perversity we are still bent on constructing a sentence that does attribute falsity unequivocally to itself, we can do so thus: ” ‘Yields a falsehood when appended to its own quotation’ yields a falsehood when appended to its own quotation”. This sentence specifies a string of nine words and says of this string that if you put it down twice, with quotation marks around the first of the two occurrences, the result is false. But that result is the very sentence that is doing the telling. The sentence is true if and only if it is false, and we have our antinomy. Quine’s paradoxical sentence doesn’t refer to itself to produce a paradox. It just refers to the expression ‘yields a falsehood when appended to its own quotation’, by quotation and then again by the anaphoric ‘it’. And the Gödel sentence does the same; as Quine puts it: Gödel’s proof may conveniently be related to the Epimenides paradox or the pseudomenon in the ‘yields a falsehood’ version. For ‘falsehood’ read ‘non-theorem’, thus: ” ‘Yields a non-theorem when appended to its own quotation’ yields a non-theorem when appended to its own quotation”. This statement no longer presents an antinomy, because it no longer says of itself that it is false. What it does say of itself is that it is not a theorem (of some deductive theory that I have not yet specified). If it is true, here is one truth that that deductive theory, whatever it is, fails to include as a theorem. If the statement is false, it is a theorem, in which event that deductive theory has a false theorem and so is discredited. What Gödel proceeds to do, in getting his proof of the incompletability of number theory, is the following. He shows how the sort of talk that occurs in the above statement — talk of non-theoremhood and of appending things to quotations — can be mirrored systematically in arithmetical talk of integers. In this way, with much ingenuity, he gets a sentence purely in the arithmetical vocabulary of number theory that inherits that crucial property of being true if and only if not a theorem of number theory. And Gödel’s trick works for any deductive system we may choose as defining ‘theorem of number theory’. The proof of the diagonal lemma goes something like this.
We’re proving that for every \(\psi(x)\) there is a \(\phi\) such that \(\phi \leftrightarrow \psi(\ulcorner \phi \urcorner)\). Think of \(\psi(x)\) as naming a kind, the \(\psi\)s, say, falsehoods or non-theorems. Then to get \(\phi\) which is true iff it is a \(\psi\), we proceed as follows: Define the function \(d\) which maps the Gödel number of \(\alpha(x)\) to the Gödel number of \(\alpha(\ulcorner \alpha(x)\urcorner)\). Supposing we have a function symbol in the language for \(d\), we can define \(\phi\) as \(\psi(d(\ulcorner \psi(d(x))\urcorner))\). If a theory \(T\) represents \(d\) then it proves \(d(\ulcorner \psi(d(x))\urcorner) = \ulcorner\phi\urcorner\). Then, by logic, we also get \(\psi(d(\ulcorner \psi(d(x))\urcorner)) \leftrightarrow \psi(\ulcorner\phi\urcorner)\). But the left-hand side is just \(\phi\), so \(T \vdash \phi \leftrightarrow \psi(\ulcorner\phi\urcorner)\). Here ‘\(\psi(x)\)’ plays the role of ‘falsehood’, ‘non-theorem’, etc. \(d\) is the function that takes an expression and appends it to its own quotation. And \(\phi\) is \(d\) applied to ‘yields a \(\psi\) if appended to its own quotation’, i.e., ‘yields a \(\psi\) if appended to its own quotation’ yields a \(\psi\) if appended to its own quotation. That sentence does not refer to itself, so it doesn’t say of itself in that sense that it is a \(\psi\). But it is equivalent to the statement that it is a \(\psi\). I don’t know which textbooks, if any, mention Quine’s paradox when introducing the diagonal lemma. I’m teaching incompleteness right now at McGill from a version of Jeremy Avigad’s notes, where he makes the connection. Let’s see how it goes over in class.
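A small, purely illustrative analogue of the diagonal construction can be written in code (this is my own toy, not from the post). The function below plays the role of $d$, "appending a phrase to its own quotation"; the resulting string describes itself without literally containing itself, which is exactly the trick the diagonal lemma formalizes.

```python
def d(template: str) -> str:
    """Fill the template's single hole with the template's own quotation."""
    return template.format(repr(template))

# Analogue of "yields a falsehood when appended to its own quotation":
# a template that talks about the result of applying d to it.
template = "the string obtained by filling the hole of {} with its own quotation"
sentence = d(template)
print(sentence)
# The printed sentence is exactly the string it describes: a fixed point of d,
# obtained without any literal self-reference, just as in the diagonal lemma.
```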
Volume 66, № 5, 2014

Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 579–597 Let $G \subset \mathbb{C}$ be a finite region bounded by a Jordan curve $L := \partial G$, let $\Omega := \mathrm{ext}\,\overline{G}$ (with respect to $\overline{\mathbb{C}}$), let $\Delta := \{w : |w| > 1\}$, and let $w = \Phi(z)$ be the univalent conformal mapping of $\Omega$ onto $\Delta$ normalized by $\Phi(\infty) = \infty$, $\Phi'(\infty) > 0$. Also let $h(z)$ be a weight function and let $A_p(h, G)$, $p > 0$, denote the class of functions $f$ analytic in $G$ and satisfying the condition $$\|f\|_{A_p(h,G)}^p := \iint_G h(z)\,|f(z)|^p \, d\sigma_z < \infty,$$ where $\sigma$ is the two-dimensional Lebesgue measure. Moreover, let $P_n(z)$ be an arbitrary algebraic polynomial of degree at most $n \in \mathbb{N}$. The well-known Bernstein–Walsh lemma states that $$|P_n(z)| \le \|P_n\|_{C(\overline{G})}\, |\Phi(z)|^{n}, \qquad z \in \Omega. \tag{*}$$ In the present work we continue the investigation of the estimate (*), in which the norm $\|P_n\|_{C(\overline{G})}$ is replaced by $\|P_n\|_{A_p(h,G)}$, $p > 0$, for a Jacobi-type weight function in regions with piecewise Dini-smooth boundary.

Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 598–608 Let $G$ be a finite group. The prime graph of $G$ is denoted by $\Gamma(G)$. Let $G$ be a finite group such that $\Gamma(G) = \Gamma(D_n(5))$, where $n \ge 6$. In the paper, as the main result, we show that if $n$ is odd, then $G$ is recognizable by the prime graph, and if $n$ is even, then $G$ is quasirecognizable by the prime graph.

Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 609–618 The twisted Kloosterman sums over $\mathbb{Z}$ were studied by V. Bykovsky, A. Vinogradov, N. Kuznetsov, R. W. Bruggeman, R. J. Miatello, I. Pacharoni, A. Knightly, and C. Li. In our paper, we obtain similar estimates for $K_\chi(\beta; \gamma; q)$ over $\mathbb{Z}[i]$ and improve the estimates obtained for the sums of this kind with Dirichlet character $\chi \pmod{q_1}$, where $q_1 \mid q$.

Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 619–633 We establish the well-posed solvability of a nonlocal multipoint (in time) problem for the evolution equations with pseudodifferential operators of infinite order.

Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 634–644 We establish upper estimates for the approximation of the classes $H_p^{\Omega}$ of periodic functions of many variables by polynomials constructed by using the system obtained as the tensor product of the systems of functions of one variable. These results are then used to establish the exact-order estimates of the orthoprojective widths for the classes $H_p^{\Omega}$ in the space $L_p$ with $p \in \{1, \infty\}$.

Multiperiodic Solution of a Boundary-Value Problem for One Class of Parabolic Equations with Multidimensional Time Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 645–655 We study the existence and uniqueness of the multiperiodic solution of the first boundary-value problem for a system of parabolic equations with multidimensional time.

Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 656–665 Let $G$ be a group and let $Z(G)$ be the center of $G$. The commuting graph of the group $G$ is an undirected graph $\Gamma(G)$ with the vertex set $G \setminus Z(G)$ such that two vertices $x, y$ are adjacent if and only if $xy = yx$. We study the commuting graphs of permutational wreath products $H \wr G$, where $G$ is a transitive permutation group acting on $X$ (the top group of the wreath product) and $(H, Y)$ is an Abelian permutation group acting on $Y$.

Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 666–678
We prove theorems on the existence and unique determination of a pair of functions: $a(t) > 0$, $t \in [0, T]$, and the solution $u(x, t)$ of the first boundary-value problem for the equation $$D_t^{\beta} u - a(t)\, u_{xx} = F_0(x, t), \qquad (x, t) \in (0, l) \times (0, T],$$ with the regularized derivative $D_t^{\beta} u$ of fractional order $\beta \in (0, 2)$, under the additional condition $a(t)\, u_x(\,\cdot\,, t) = F(t)$, $t \in [0, T]$.

Periodic and Bounded Solutions of the Coulomb Equation of Motion of Two and Three Point Charges with Equilibrium in the Line Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 679–693 Periodic and bounded solutions of the Coulomb equation of motion in the line are obtained for two and three identical negative point charges in the fields of two and three symmetrically located fixed point charges. The systems possess equilibrium configurations. The Lyapunov, Siegel, Moser, and Weinstein theorems are applied.

Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 694–698 A subgroup $H$ of a group $G$ is said to be weakly s-semipermutable in $G$ if $G$ has a subnormal subgroup $T$ such that $HT = G$ and $H \cap T \le H_{\overline{s}G}$, where $H_{\overline{s}G}$ is the subgroup of $H$ generated by all subgroups of $H$ that are s-semipermutable in $G$. The main aim of the paper is to study the $p$-nilpotency of a group for which every second maximal subgroup of its Sylow $p$-subgroups is weakly s-semipermutable. Some new results are obtained.

Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 699–707 In the Sobolev-type space with exponential weight, we obtain sufficient conditions for the well-posed and unique solvability on the entire axis of a fourth-order operator-differential equation whose main part has a multiple characteristic. We establish estimates for the norms of the operators of intermediate derivatives related to the conditions of solvability. In addition, we deduce the relationship between the exponent of the weight and the lower bound of the spectrum of the main operator appearing in the principal part of the equation. The obtained results are illustrated by an example of a problem for partial differential equations.

Ukr. Mat. Zh. - 2014. - 66, № 5. - pp. 712–720 We study the conditions on the density of a subsequence of a statistically convergent sequence under which this subsequence is also statistically convergent. Some sufficient conditions of this type and almost converse necessary conditions are obtained in the setting of general metric spaces.
For proving the quadratic reciprocity, Gauss sums are very useful. However this seems an ad-hoc construction. Is this useful in a wider context? What are some other uses for Gauss sums? Gauss sums are not an ad-hoc construction! I know two ways to motivate the definition, one of which requires that you know a little Galois theory and the other which is totally mysterious to me. Here is the Galois-theoretic explanation. Let $\zeta_p$ be a primitive $p^{th}$ root of unity, for $p$ prime. The cyclotomic field $\mathbb{Q}(\zeta_p)$ is Galois, so one can define its Galois group, the group of all field automorphisms which preserve $\mathbb{Q}$. Such an automorphism is determined by what it does to $\zeta_p$, and it must send $\zeta_p$ to another primitive $p^{th}$ root of unity. It follows that the Galois group $G = \text{Gal}(\mathbb{Q}(\zeta_p)/\mathbb{Q})$ is isomorphic to $(\mathbb{Z}/p\mathbb{Z})^{\times}$, which is cyclic of order $p-1$. Now suppose $p$ is odd. As a cyclic group of even order, $G$ has a unique subgroup $H$ of index two given precisely by the multiplicative group of quadratic residues $\bmod p$, so by the fundamental theorem of Galois theory the fixed field $\mathbb{Q}(\zeta_p)^H$ is the unique quadratic subextension of $\mathbb{Q}(\zeta_p)$. And it's not hard to see that this unique quadratic subextension must be generated by $$\sum_{\sigma \in H} \sigma(\zeta_p) = \sum_{a \text{ is a QR}} \zeta_p^a = \frac{1}{2} \left( \sum_{a=1}^{p-1} \zeta_p^{a^2} \right)$$ which you will of course recognize as a Gauss sum! So the Gauss sum generates a quadratic subextension, and any of various methods will tell you that this subextension is precisely $\mathbb{Q}(\sqrt{p^{\ast}})$ where $p^{\ast} = (-1)^{ \frac{p-1}{2} } p$. (This does not actually require any computation: if you know enough algebraic number theory, it follows from a consideration of which primes ramify in cyclotomic extensions.) The totally mysterious explanation is that Gauss sums naturally appear when you start thinking about the discrete Fourier transform. For example, the trace of the DFT matrix is a Gauss sum. But more mysteriously, Gauss sums are eigenfunctions of the DFT in a certain sense. (I sketch how this works here.) There is a sort of mysterious connection here to the Gaussian distribution, which is an eigenfunction of the continuous Fourier transform; see this MO question. Again, I don't know what to make of this. There is a book by Berg called The Fourier-analytic proof of quadratic reciprocity and it may or may not be about this construction. Not just quadratic reciprocity, one can use them to prove higher reciprocity laws: see Ireland and Rosen's A Classical Introduction to Modern Number Theory. They also turn up in the functional equation for Dirichlet L-functions (and are massively generalized in the topic of root numbers). They are also used to describe something called the Talbot Effect: look at #8 in the list. I attended a seminar by Mike Berry about 12 years ago where he claimed that the Talbot Effect was a physical manifestation of Gauss Sums. Srinivasa Ramanujan actually had discovered some definite integral formulas related to the Gauss sums. Please see the below article: Some definite integrals connected with Gauss sums. Messenger of Mathematics, XLIV, $1915$, $75-85$ From Wikipedia: (Sorry, I can't explain this.) The absolute value of Gauss sums is usually found as an application of Plancherel's theorem on finite groups.
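The Galois-theoretic description can be checked numerically for small primes. The snippet below is an illustration I added (not from the answers): it computes the classical quadratic Gauss sum $g = \sum_{a=0}^{p-1} \zeta_p^{a^2}$ (a close relative of the sum displayed above) and verifies that $g^2 = p^{\ast} = (-1)^{(p-1)/2}\, p$, which is why $\mathbb{Q}(\zeta_p)$ contains $\sqrt{p^{\ast}}$.

```python
import cmath

def gauss_sum(p: int) -> complex:
    """Quadratic Gauss sum: sum over a = 0..p-1 of exp(2*pi*i*a^2/p), p an odd prime."""
    return sum(cmath.exp(2j * cmath.pi * (a * a % p) / p) for a in range(p))

for p in (3, 5, 7, 11, 13, 17, 19, 23):
    g = gauss_sum(p)
    p_star = p if p % 4 == 1 else -p      # (-1)^((p-1)/2) * p
    # g*g should equal p_star up to floating-point error
    print(p, round((g * g).real, 6), round((g * g).imag, 6), p_star)
```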
Another application of the Gauss sum: How to prove that: $\tan(3\pi/11) + 4\sin(2\pi/11) = \sqrt{11}$ Gauss sums and exponential sums in general are particularly useful for determining the size of certain algebraic varieties in finite fields or even in general abelian groups. If one defines $$ A_t = \{x \in \mathbb{F}_q^d : f(x) = t\} $$ where $t \in \mathbb{F}_q\setminus\{0\}$, then by orthogonality we have $$ |A_t| = q^{-1} \sum_{s \in \mathbb{F}_q} \sum_{x \in \mathbb{F}_q^d} \chi(s (f(x) - t)), $$ where $\chi$ is any nontrivial additive character on $\mathbb{F}_q$. For example, if one considers $x = (x_1, \dots , x_d) \in \mathbb{F}_q^d$ and defines $f(x) = x_1^2 + \dots + x_d^2$, then $A_t$ would be some finite field analogue of a sphere. Bounding such a set would then be equivalent to bounding $$ q^{-1}\sum_{s \in \mathbb{F}_q} \left(\sum_{x \in \mathbb{F}_q} \chi(sx^2) \right)^d \chi(-st). $$ Gauss Sums and in particular well-known bounds for Gauss sums imply that such a sum is of size $q^{d-1}(1 + o_d(1))$ as $q \to \infty$. As Qiaochu points out above, such bounds are nice to have when one works with the discrete Fourier transform. A small additional note, in line with an earlier answer: Gauss sums are, literally, the Lagrange resolvents obtained in the course of expressing roots of unity in terms of radicals. (Yes, then the Kummer-Stickelberger business can be used to effectively obtain the actual radical expressions...: here .)
I have read two definitions of Sobolev spaces. Definition 1: We let $\lambda$ denote $\lambda^s(\xi)=(1+|\xi|^2)^\frac{s}{2}$ for $s \in \Bbb R$, $\xi \in \Bbb R^n$. We say that $u \in H^s$ if $u \in S'$ and $$||\lambda^s \hat{u} ||_2^2 = (2 \pi)^{-n} \int (1 + |\xi|^2 )^s |\hat{u}(\xi)|^2 \, d \xi < \infty$$ under the identification of $L^2$ in $S'$. Here $S'$ is the dual of the Schwartz space on $\Bbb R^n$, known as the space of tempered distributions, with respect to the norm on the Schwartz space. Definition 2: On the Schwartz space, the Sobolev space $W^s$ is the completion of $S$ with respect to the $s$-norm $$||u||_s^2 := (2 \pi)^{-n} \int (1 + |\xi|^2 )^s |\hat{u}(\xi)|^2 \, d \xi.$$ What differentiates the first definition from this second one is that the first works with the topological dual of the Schwartz space, whilst the latter works directly with the Schwartz space. Are these the same? Why do we have an $H$ and a $W$? Sources: the first definition is from Xavier's Introduction to Pseudodifferential Manifolds; the second definition is from Ebert's notes. I am quite confused, as both of these definitions do not look like the usual definition of Sobolev spaces. As suggested by user Rhys: So we have a map $S \rightarrow S'$, given by $$\phi \mapsto u_\phi= \left( \psi \mapsto \int \psi \bar{\phi} \right),$$ with $$||\hat{u}_\phi||_2 = ||u_\phi||$$ by construction. It suffices to show that $S$ in the $||\cdot||_2$ norm is (i) dense in $H^s$, and that (ii) $H^s$ is complete. Are these true?
Increasing the amount of installed renewable energy sources such as solar and wind is an essential step towards the decarbonization of the energy sector. From a technical point of view, however, the stochastic nature of distributed energy resources (DER) causes operational challenges. Among them, imbalance between production and consumption, overvoltage, and overload of grid components are the most common. As DER penetration increases, it is becoming clear that incentive strategies such as Net Energy Metering (NEM) are threatening utilities, since NEM doesn't reward prosumers for synchronizing their energy production and demand. In order to reduce congestions, distribution system operators (DSOs) currently use a simple indirect method, consisting of a bi-level energy tariff, i.e. the price of buying energy from the grid is higher than the price of selling energy to the grid. This encourages individual prosumers to increase their self-consumption. However, this is inefficient in regulating the aggregated power profile of all prosumers. Utilities and governments think that better grid management can be achieved by making the distribution grid 'smarter', and they are currently deploying massive amounts of investment to pursue this vision. As I explained in my previous post on the need for decentralized architectures for new energy markets, the common view of the scientific community is that a smarter grid requires an increase in the amount of communication between generators and consumers, adopting near real-time markets and dynamic prices, which can steer users' consumption during periods in which DER energy production is higher, or increase their production during high demand. For example, in California a modification of NEM that allows prosumers to export energy from their batteries during the evening peak of demand has been recently proposed. But as flexibility will be offered at different levels and will provide a number of services, from voltage control for the DSOs to control energy for the transmission system operators (TSOs), it is important to make sure that these services will not interfere with each other. So far, a comprehensive approach towards the actuation of flexibility as a system-wide leitmotiv, taking into account the effect of demand response (DR) at all grid levels, is lacking. In order to optimally exploit prosumers' flexibility, new communication protocols are needed, which, coupled with a sensing infrastructure (smart meters), can be used to safely steer aggregated demand in the distribution grid, up to the transmission grid. The problem of coordinating dispatchable generators is well known to system operators and has been studied extensively in the literature. When grid constraints are not taken into account, this is known under the name of economic dispatch, and consists in minimizing the generation cost of a group of power plants. When operational constraints are considered, the problem increases in complexity, due to the power flow equations governing currents and voltages in the electric grid. Nevertheless, several approaches are known for solving this problem, a.k.a. optimal power flow (OPF), using approximations and convex formulations of the underlying physics. OPF is usually solved in a centralized way by an independent system operator (ISO). Anyway, when the number of generators increases, as in the case of DERs, the overall problem increases in complexity but can still be effectively solved by decomposing it among generators.
The decomposition has two other main advantages over a centralized solution, apart from allowing faster computation. The first is that generators do not have to disclose all their private information in order for the problem to be solved correctly, allowing competition among the different generators. The second one is that the computation has no single point of failure. In this direction, we have recently proposed a multilevel hierarchical control which can be used to coordinate large groups of prosumers located at different voltage levels of the distribution grid, taking into account grid constraints. The difference between power generators and prosumers is that the latter cannot control when power is generated, but they can operate deferrable loads such as heat pumps, electric vehicles, boilers and batteries. The idea is that prosumers in the distribution grid can be coordinated only by means of a price signal sent by their parent node in the hierarchical structure, an aggregator. This allows the algorithm to be solved using a forward-backward communication protocol (a toy code sketch of this pass is included further below). In the forward pass, each aggregator receives a reference price from its parent node and sends it downwards, along with its own reference price, to its child nodes (prosumers or aggregators) located at a lower hierarchy level. This mechanism is propagated along all the nodes, until the terminal nodes (or leaves). Prosumers in leaf nodes solve their optimization problems as soon as they are reached by the overall price signal. In the backward pass, prosumers send their solutions to their parents, which collect them and send the aggregated solution upward. Apart from this intuitive coordination protocol, the proposed algorithm has other favorable properties. One of them is that prosumers only need to share information on their energy production and consumption with one aggregator, while keeping all other parameters and information private. This is possible thanks to the decomposition of the control problem. The second property is that the algorithm exploits parallel computation of the prosumer-specific problems, ensuring minimal communication overhead. However, being able to coordinate prosumers is not enough. The main difference between the OPF and DR problems is that the latter involves the participation of self-serving agents, which cannot be a priori trusted by an independent system operator (ISO). This implies that if an agent finds it profitable (in terms of its own economic utility), it will solve a different optimization problem from the one prescribed by the ISO. For this reason, some aspects of DR formulations are better described through a game-theoretic framework. Furthermore, several studies have focused on the case in which grid constraints are enforced by DSOs, directly modifying voltage angles at buses. Although this is a reasonable solution concept, the current shift of generation from the high voltage network to the low voltage network suggests that in the future prosumers, and not DSOs, could be in charge of regulating voltages and mitigating power peaks. With this in mind, we focused on analyzing the decomposed OPF using game theory and mechanism design, which study the behavior and outcomes of a set of agents trying to maximize their own utilities $u(x_i,x_{-i})$, which depend on their own actions $x_i$ and on the actions of the other agents $x_{-i}$, under a given 'mechanism'.
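The following is a minimal, self-contained sketch of the forward-backward message flow described above. It is my own simplification with made-up node classes and a trivial linear "response" at the leaves; the actual algorithm in the post involves grid constraints and a proper optimization problem at each node.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Prosumer:
    desired: float        # power the prosumer would consume with no price signal (assumed)
    flexibility: float    # strength of its (here: linear) reaction to the price (assumed)

    def respond(self, price: float) -> float:
        # Leaf step: each prosumer solves its own (trivial) local problem.
        return self.desired - self.flexibility * price


@dataclass
class Aggregator:
    children: List["Aggregator | Prosumer"]
    markup: float = 0.0   # this aggregator's own contribution to the reference price

    def forward_backward(self, parent_price: float) -> float:
        # Forward pass: propagate the parent's reference price down, adding our own term.
        price = parent_price + self.markup
        # Backward pass: collect and aggregate the children's power profiles.
        total = 0.0
        for child in self.children:
            if isinstance(child, Prosumer):
                total += child.respond(price)
            else:
                total += child.forward_backward(price)
        return total


# A toy two-level hierarchy (all numbers are arbitrary).
root = Aggregator(children=[
    Aggregator(children=[Prosumer(2.0, 0.5), Prosumer(3.0, 0.8)], markup=0.1),
    Aggregator(children=[Prosumer(1.5, 0.2)], markup=0.2),
])

for price in (0.0, 0.5, 1.0):
    print("reference price", price, "-> aggregated demand", root.forward_backward(price))
```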
The whole field of mechanism design tries to escape from the Gibbard–Satterthwaite theorem, which can perhaps be better understood by means of its corollary: if a strict voting rule has at least 3 possible outcomes, it is non-manipulable if and only if it is dictatorial. It turns out that the only way to escape this impossibility result is to adopt monetary transfers. As such, our mechanism must define both an allocation rule and a taxation (or reward) rule. In this way, the overall value seen by the agents is equal to their own utility augmented by the taxation/remuneration imposed by the mechanism: $$v_i(x_i,x_{-i}) = u_i(x_i,x_{-i}) + c_i(x_i,x_{-i}).$$ Anyway, monetary transfers are as powerful as they are perilous. When designing taxes and incentives, one should always keep in mind two things. First, designing the wrong incentives could result in spectacular failures, as we learned from a rather anecdotal misuse of incentives from British colonial history known as the cobra effect. Second, if there is a way to fool the mechanism, self-serving prosumers will almost surely find it: "Know that some people will do everything they can to game the system, finding ways to win that you never could have imagined." (Steven D. Levitt) A widely adopted solution concept, used to rule out most strategic behaviors of the agents (but not the same as a strategyproof mechanism), is the ex-post Nash equilibrium (NE), or simply equilibrium, which is reached when the following set of problems is jointly solved: $$\begin{aligned} \min_{x_i \in \mathcal{X}_i} & \quad v_i(x_i, x_{-i}) \quad \forall i = 1,\dots,N \\ \text{s.t.} & \quad Ax \leq b, \end{aligned}$$ where $x_i \in \mathcal{X}_i$ means that the agents' actions are constrained to be in the set $\mathcal{X}_i$, which could include for example the prosumer's maximum battery capacity or the maximum power at which the prosumer can draw energy from the grid. The linear inequality in the second row represents the grid constraints, which are a function of the actions of all the prosumers, $x = [x_i]_{i=1}^N$, where $N$ is the number of prosumers we are considering. Rational agents will always try to reach an NE, since in this situation they cannot improve their values given that the other prosumers do not change their actions. Using basic optimization notions, the above set of problems can be reformulated using the KKT conditions, which under some mild assumptions ensure that the prosumers' problems are optimally solved. Briefly, we can augment each prosumer's objective function with a first-order term in the coupling constraints, through a Lagrange multiplier $\lambda_i$, and use the indicator function to encode its own constraints: $$\tilde{v}_i(x_i,x_{-i}) = v_i(x_i,x_{-i}) + \lambda_i^T (Ax-b) + \mathcal{I}_{\mathcal{X}_i}(x_i).$$ The KKT conditions now read $$\begin{aligned} 0 &\in \partial_{x_i} v_i(x_i, x_{-i}) + \mathrm{N}_{\mathcal{X}_i}(x_i) + A_i^T\lambda, \\ 0 &\leq \lambda \perp -(Ax-b) \geq 0, \end{aligned}$$ where $\mathrm{N}_{\mathcal{X}_i}$ is the normal cone operator, i.e. the sub-differential of the indicator function. Loosely speaking, the Nash equilibrium is not always a reasonable solution concept, due to the fact that multiple equilibria usually exist. For this reason, equilibrium refinement concepts are usually applied, in which most of the equilibria are discarded a priori. The variational NE (VNE) is one such refinement.
In VNE, the price of the shared constraints paid by each agent is the same. This has the nice economic interpretation that all the agents pay the same price for the common good (the grid). Note that we have already taken all the Lagrange multipliers to be equal, $\lambda_i = \lambda$ for all $i$, in writing the KKT conditions. One of the nice properties of the VNE is that for well-behaved problems this equilibrium is unique. Since it is unique and has a reasonable economic outcome (price fairness), rational prosumers will agree to converge to it: at the equilibrium, no one is better off changing its own actions while the other prosumers' actions are fixed. It turns out that a trivial modification of the parallelized strategy we adopted to solve the multilevel hierarchical OPF can be used to reach the VNE (a toy numerical illustration is sketched below). On top of all this, new economic business models must be put in place in order to reward prosumers for their flexibility. In fact, rational agents would not participate in the market if the energy price they pay is higher than what they pay to their current energy retailer. One such business model is the aforementioned Californian proposal to enable NEM with the energy injected by electrical batteries. Another possible use case is the creation of a self-consumption community, in which a group of prosumers in the same LV grid pays only at the point of common coupling with the grid of the DSO (which could be, e.g., the LV/MV transformer in figure 1). In this way, if the group of prosumers is heterogeneous (someone is producing energy while someone else is consuming), the overall cost that they pay as a community will always be less than what they would have paid as single prosumers, at the loss of the DSO. But if this economic surplus drives the prosumers to take care of power quality in the LV/MV grid, the DSO could benefit from this business model, delegating part of its grid-regulation duties to them. How does blockchain fit in? Synchronizing thousands of entities connected to different grid levels is a technically hard task. Blockchain technology can be used as a trustless distributed database for creating and managing energy communities of prosumers willing to participate in flexibility markets. On top of the blockchain, off-chain payment channels can be used to keep track of the energy consumed and produced by prosumers and to disburse payments in a secure and seamless way. Different business models are possible, and technical solutions as well. But we think that in the distribution grid, the economic value lies in shifting the power production and consumption of the prosumers, enabling a really smarter grid. At Hive Power we are enabling the creation of energy sharing communities where all participants are guaranteed to benefit from the participation, reaching at the same time a technical and financial optimum for the whole community.
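To give a feel for the kind of computation behind a VNE with a common price, here is a deliberately tiny sketch (my own toy example, not the algorithm from the post): two prosumers with quadratic discomfort around their desired consumption share a single capacity constraint, each best-responds to a common price $\lambda$, and a dual-ascent price update drives the shared constraint to feasibility. The cost model, numbers, and step size are all assumptions for illustration.

```python
import numpy as np

a = np.array([1.0, 2.0])   # discomfort weights (arbitrary)
d = np.array([3.0, 4.0])   # desired consumptions (arbitrary)
C = 5.0                    # shared capacity: x_1 + x_2 <= C, i.e. A = [1, 1], b = C
lam, step = 0.0, 0.2       # common price and dual-ascent step (arbitrary)

for _ in range(500):
    # Best response of each agent to the common price lambda:
    # minimize 0.5*a_i*(x_i - d_i)^2 + lam*x_i  =>  x_i = max(d_i - lam/a_i, 0)
    x = np.maximum(d - lam / a, 0.0)
    # Dual ascent on the shared constraint: raise the price if it is violated.
    lam = max(0.0, lam + step * (x.sum() - C))

print("allocation:", x, "common price:", round(lam, 3), "total:", round(x.sum(), 3))
# Converges to x ~ [1.667, 3.333] with lambda ~ 1.333: both agents face the same
# price for the shared constraint, the defining feature of the variational NE.
```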
It is strange to me that for a symmetry which involves $\dot{x}$, there always seems to appear a term with $\dddot{x}$ in the variation of the equations of motion, which doesn't make much sense. I think that probably the procedure I am following is wrong. I will show you an example: Consider the simple case of a free particle in one dimension; its Lagrangian is: $$L=\frac{1}{2}\dot{x}^2$$ It is obvious that the system conserves energy, so the symmetry that must be valid is $(\delta x=0 ,\delta t=\epsilon)$. I may rewrite this as a symmetry that doesn't involve time variations (as the paper by E. L. Hill does): $$(\delta_{*}x=\epsilon\dot{x},\delta_{*}t=0)$$ Now, I calculate the variation of the equations of motion, with the hope of finding that $\delta(\text{e.o.m})=\text{e.o.m}.$ Such a result would mean that the equations of motion are invariant under the symmetry in consideration. So: $$\delta{(\text{e.o.m})}=\delta(\ddot{x})=\ddot{\eta}$$ In this case $\eta = \dot{x}$ (remember that a variation is of the form $\delta{x}=\epsilon\eta$). So $\ddot{\eta}=\dddot{x}$. Hence: $$\delta(\text{e.o.m})=\dddot{x}\neq\text{e.o.m}$$ This doesn't make sense because, for the system under consideration, time translation is a Noetherian symmetry that gives conservation of energy. My question is: What is failing in this procedure? Is there a general way of showing that some symmetry is indeed Noetherian?
I've been having difficulty finding a source that lists all the properties of the spinor bundle of a string worldsheet explicitly, so I've had a go at creating my own description. I'd really appreciate it if someone could tell me if the following is true: Take the worldsheet to be some 2d pseudo-Riemannian orientable manifold $M$. One can associate with each point $x \in M$ a 2d tangent space $TM_{x}$. The disjoint union of $TM_{x}$ at all $x$ defines the total space $TM$ of a tangent bundle ($TM$, $\pi_{TM}$, $M$) whose projection is given by: \begin{equation}\pi_{TM}: TM \rightarrow M\end{equation} The worldsheet $M$ is the base space of the tangent bundle and each $TM_{x}$ is a fibre. Since the tangent space is 2d, the bases that exist in each $TM_{x}$ are 2d also. Since the base space is pseudo-Riemannian, so is each tangent space, and the ordered bases (frames) that exist on each $TM_{x}$ are 'pseudo-orthonormal'. This would mean that the bases transform under an $O(1,1)$ group. However, since the base space $M$ is orientable, so is each $TM_{x}$, and that means that the frames are oriented pseudo-orthonormal and transform under $SO(1,1)$ instead. This allows the oriented orthonormal frame bundle (a specific sub-class of principal bundle) to be written as $(F_{SO(1,1)}(M), \pi_{F}, M, SO(1,1))$, where the projection acts as: \begin{equation}\pi_{F}: F_{SO(1,1)}(M) \rightarrow M\end{equation} The fibre $F_{x}$ of this frame bundle at a point $x$ on $M$ is the set of all frames of $TM_{x}$ at the same point $x$. $F_{x}$ is homeomorphic to the gauge group $SO(1,1)$ and is said to be an $SO(1,1)$-torsor. However, now one can define a lift of the group $SO(1,1)$ to $Spin(1,1)$. The corresponding frame bundle is now $(P, \pi_{P}, M, Spin(1,1))$ with projection: \begin{equation}\pi_{P}: P \rightarrow M\end{equation} The fibre $P_{x}$ of this frame bundle at a point $x$ on $M$ is the set of all frames of $TM_{x}$ at the same point $x$. $P_{x}$ is homeomorphic to the gauge group $Spin(1,1)$ and is said to be a $Spin(1,1)$-torsor. How can the set of all frames in $TM_{x}$ be homeomorphic to both $SO(1,1)$ and $Spin(1,1)$? The spinor bundle can then be defined to be given by $(S, \pi_{S}, M, \Delta_{(1,1)}, Spin(1,1))$, with a projection that acts as: \begin{equation}\kappa: S \rightarrow M\end{equation} Here $S$ is given by: \begin{equation}S = P \times_{\kappa} \Delta_{(1,1)} = (P \times \Delta_{(1,1)})/Spin(1,1)\end{equation} The fibre is given by $\Delta_{(1,1)}$, which is the Hilbert space of all spinor states. Each section of this bundle then corresponds to a particular Majorana-Weyl spinor field configuration on the worldsheet. (This post imported from StackExchange Physics at 2015-03-04 12:46 (UTC), posted by SE-user Siraj R Khan)
The Frenkel Defect (also known as the Frenkel pair/disorder) is a defect in the crystal lattice where an atom or ion occupies a normally vacant site other than its own. As a result the atom or ion leaves its own lattice site vacant.

The Frenkel Defect in a Molecule

The Frenkel Defect describes a defect in which an atom or ion (normally the cation) leaves its own lattice site vacant and instead occupies a normally vacant site. As depicted in the picture below, the cation leaves its own lattice site open and places itself in the space between the surrounding cations and anions. This defect is only possible if the cations are smaller in size than the anions.

Figure 1: The Frenkel Defect in a molecule

The number of Frenkel Defects can be calculated using the equation: \[ n = \sqrt{NN^*}\, e^{-\Delta H/(2RT)} \tag{2}\] where $N$ is the number of normally occupied positions, $N^*$ is the number of available positions for the moving ion, $\Delta H$ is the enthalpy of formation of one Frenkel defect, and $R$ is the gas constant. Frenkel defects are intrinsic defects because their existence causes the Gibbs energy of a crystal to decrease, which means their formation is favorable. [2]

Molecules Found with a Frenkel Defect

The crystal lattices are relatively open and the coordination number is low.

References

Housecroft, Catherine E., and Alan G. Sharpe. Inorganic Chemistry. 3rd ed. Harlow: Pearson Education, 2008. Print.
Tilley, Richard. Understanding Solids. John Wiley & Sons, Ltd., 2004.

Problems

What requirements are needed in order for the Frenkel defect to occur in an atom?
What are the differences between the Schottky defect and the Frenkel defect?

Answers

A low coordination number, as well as having the crystal lattice open for the moving ion.
The Frenkel defect causes a cation to leave its own lattice site and occupy a normally vacant one, while the Schottky defect requires that an equal number of cations and anions be absent to maintain charge neutrality.

Contributors

Stanley Hsia, UC Davis
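For a feel of the magnitudes in Equation (2), the short calculation below uses made-up but plausible numbers (they are not data for any specific crystal) to evaluate $n = \sqrt{NN^*}\, e^{-\Delta H/(2RT)}$ at two temperatures.

```python
import math

R = 8.314          # gas constant, J mol^-1 K^-1
N = 1.0e22         # normally occupied sites (hypothetical)
N_star = 2.0e22    # available sites for the moving ion (hypothetical)
dH = 130e3         # J mol^-1, assumed enthalpy of Frenkel-defect formation

for T in (300.0, 600.0):
    n = math.sqrt(N * N_star) * math.exp(-dH / (2 * R * T))
    print(f"T = {T:.0f} K: n ~ {n:.3e} defects")
# The defect population grows steeply with temperature because of the exponential term.
```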
Volume 66, № 8, 2014

Ukr. Mat. Zh. - 2014. - 66, № 8. - pp. 1011–1028 We give direct proofs of some of Ramanujan's P-Q modular equations based on simply proved elementary identities from Chapter 16 of his Second Notebook.

Exponentially Convergent Method for the First-Order Differential Equation in a Banach Space with Integral Nonlocal Condition Ukr. Mat. Zh. - 2014. - 66, № 8. - pp. 1029–1040 For the first-order differential equation with unbounded operator coefficient in a Banach space, we study the nonlocal problem with integral condition. An exponentially convergent algorithm for the numerical solution of this problem is proposed and justified under the assumption that the operator coefficient $A$ is strongly positive and certain existence and uniqueness conditions are satisfied. The algorithm is based on the representations of operator functions via the Dunford–Cauchy integral along a hyperbola covering the spectrum of $A$ and the quadrature formula containing a small number of resolvents. The efficiency of the proposed algorithm is illustrated by several examples.

Ukr. Mat. Zh. - 2014. - 66, № 8. - pp. 1041–1057 We establish necessary and sufficient conditions for the removability of compact sets.

Ukr. Mat. Zh. - 2014. - 66, № 8. - pp. 1058–1073 We present a brief survey of the development of functional analysis in Ukraine and of the problems of infinite-dimensional analysis posed and solved for thousands of years, which laid the foundations of this branch of mathematics.

Necessary and Sufficient Conditions for the Solvability of Linear Boundary-Value Problems for the Fredholm Integrodifferential Equations Ukr. Mat. Zh. - 2014. - 66, № 8. - pp. 1074–1091 We propose a method for the investigation and solution of linear boundary-value problems for the Fredholm integrodifferential equations based on the partition of the interval and introduction of additional parameters. Every partition of the interval is associated with a homogeneous Fredholm integral equation of the second kind. The definition of regular partitions is presented. It is shown that the set of regular partitions is nonempty. A criterion for the solvability of the analyzed problem is established and an algorithm for finding its solutions is constructed.

Ukr. Mat. Zh. - 2014. - 66, № 8. - pp. 1092–1105 We study the problems of analytic theory and the numerical-analytic solution of the integral convolution equation of the second kind $$\varepsilon^2 f(x) + \int_0^r K(x-t)\, f(t)\, dt = g(x), \qquad x \in [0, r),$$ where $$\varepsilon > 0, \quad r \le \infty, \quad K \in L_1(-\infty, \infty), \quad K(x) = \int_a^b e^{-|x|s}\, d\sigma(s) \ge 0.$$ The factorization approach is used and developed. The key role in this approach is played by the V. Ambartsumyan nonlinear equation.

Ukr. Mat. Zh. - 2014. - 66, № 8. - pp. 1106–1116 For a two-dimensional continued fraction another generalization of the Worpitzky theorem is proved, and limit sets are proposed for Worpitzky-like theorems in the case where the element sets of the two-dimensional continued fraction are replaced by their boundaries.

Trigonometric Approximations and Kolmogorov Widths of Anisotropic Besov Classes of Periodic Functions of Several Variables Ukr. Mat. Zh. - 2014. - 66, № 8. - pp. 1117–1132
1117–1132 We describe the Besov anisotropic spaces of periodic functions of several variables in terms of the decomposition representation and establish the exact-order estimates of the Kolmogorov widths and trigonometric approximations of functions from unit balls of these spaces in the spaces \(L_q\). Ukr. Mat. Zh. - 2014. - 66, № 8. - pp. 1133–1145 The main aim of the paper is to introduce an operator in the space of Lebesgue measurable real or complex functions L(a, b). Some properties of the Riemann–Liouville fractional integrals and differential operators associated with the function \(E^{\gamma,\delta}_{\alpha,\beta,\lambda,\mu,\rho,p}(cz; s, r)\) are studied and the integral representations are obtained. Some properties of a special case of this function are also studied by the means of fractional calculus. Ukr. Mat. Zh. - 2014. - 66, № 8. - pp. 1146–1152 A subgroup H of a finite group G is said to be Hall S-quasinormally embedded in G if H is a Hall subgroup of the S-quasinormal closure \(H^{SQG}\). We study finite groups G containing a Hall S-quasinormally embedded subgroup of index \(p^n\) in G.
This is one of my friend's homework questions. I tried to solve it and explain it to him but I couldn't. The question is simple. Let $ X_n = (\text{# of successes}) - (\text{# of failures}) $ in $n$ Bernoulli trials with probability of success $p$ and probability of failure $(1-p)$ for each of the Bernoulli trials. Find $E[X_n]$ and $Var[X_n]$. My attempt to find the PMF of $X_n$ is the following: $ f(x)= \begin{cases} p^n, & \text{if } x=n \\ p^{n-1}(1-p)^1 \binom{n}{n-1}, & \text{if } x=n-2 \\ p^{n-2}(1-p)^2 \binom{n}{n-2}, & \text{if } x=n-4 \\ \vdots & \vdots \\ (1-p)^n, & \text{if } x=-n \\ 0 & \text{otherwise} \end{cases} $ More compactly, $ f(n-i)= \begin{cases} p^{(n-\frac{i}{2})} (1-p)^{(\frac{i}{2})}\binom{n}{n-\frac{i}{2}}, & \text{if } 0 \leq i \leq 2n \text{ and } i \% 2 = 0 \\ 0 & \text{otherwise} \end{cases} $ And using this PMF, the expected value will be: $ \begin{align} E[x] &= \sum_{i} i f(i) \\ &= \sum_{i \in {0 \leq i \leq 2n \text{ and } i \% 2 = 0}} (n-i) p^{(n-\frac{i}{2})} (1-p)^{(\frac{i}{2})}\binom{n}{n-\frac{i}{2}} \end{align} $ This is where I got stuck. I feel like there is a simpler way to solve this. Any ideas?
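Since the question only asks for the first two moments, one way to sanity-check any closed form you derive is a quick Monte Carlo sketch. Writing $X_n = S - (n - S) = 2S - n$ with $S \sim \text{Binomial}(n,p)$ gives $E[X_n] = n(2p-1)$ and $Var[X_n] = 4np(1-p)$; the snippet below (assuming NumPy is available) just checks those numbers empirically.

```python
import numpy as np

def simulate(n, p, trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    successes = rng.binomial(n, p, size=trials)  # S ~ Binomial(n, p)
    x = 2 * successes - n                        # X_n = S - (n - S) = 2S - n
    return x.mean(), x.var()

n, p = 20, 0.3
emp_mean, emp_var = simulate(n, p)
print("empirical:   ", emp_mean, emp_var)
print("closed form: ", n * (2 * p - 1), 4 * n * p * (1 - p))
```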
I'll expand on the answer by Yuval Filmus by providing an interpretation based on multi-objective optimization problems. Single-objective optimization and approximation In computer science we often study optimization problems with a single objective (for example, minimize f(x) subject to some constraint). When proving, say, NP-completeness, it is common to consider the corresponding budget problem. For example, in the maximum clique problem, the objective is to maximize the cardinality of the clique, and the budget problem is the problem of deciding whether there is a clique of size at least k, where k is given as part of the input to the problem. When it is not possible to compute an optimal solution efficiently, as in the case of the maximum clique problem, we seek an approximation algorithm, a function that outputs a solution within a multiplicative factor of an optimal solution. You could also consider an approximation algorithm for the budget problem, a function that outputs a solution that satisfies f(x) ≥ ck in the case of a maximization problem, where c is a number less than one. In this situation, the solution may violate the hard constraint f(x) ≥ k, but the "severity" of the violation is bounded by c. Multi-objective optimization and bi-criterion approximation In some cases, you may want to optimize two objectives simultaneously. For a rough example, I may want to maximize my "revenue" while minimizing my "cost". In such a situation, there is no single optimal value, as there is a tradeoff between the two objectives; for more information, see the Wikipedia article on Pareto efficiency. One way of turning a two-objective optimization problem into a single-objective optimization problem (for which we can reason about the optimal value of the objective function) is to consider the two constraint problems, one for each objective. If the problem is to simultaneously maximize f(x) while minimizing g(x), the first constraint problem is to minimize g(x) subject to the constraint f(x) ≥ k, where k is given as part of the input to this single-objective optimization problem. The second constraint problem is defined similarly: maximize f(x) subject to the constraint g(x) ≤ ℓ, where ℓ is given as part of the input. An (α, β)-bicriteria approximation algorithm for the first constraint problem is a function that takes a budget parameter k as input and outputs a solution x such that $f(x) \geq \alpha k$, $g(x) \leq \beta g(x^*)$, where $x^*$ is a solution that achieves the optimal value for g. A bicriteria approximation algorithm for the second constraint problem takes the budget parameter ℓ as input and outputs a solution such that $f(x) \geq \alpha f(x^*)$, $g(x) \leq \beta \ell$. In other words, the bicriteria approximation algorithm is simultaneously an approximation for the budget problem in the first objective and the optimization problem in the second objective. (This definition was adapted from page four of "Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints", by Iyer and Bilmes, 2013.) The inequalities switch directions when the objectives switch from maximum to minimum or vice versa.
I was going through the derivation of intensity of waves from coherent sources for constructive and destructive interference: Suppose you have two sources that are at the same frequency and have the same amplitude and phase but are at different locations. One source might be a distance $x$ away from you and the other a distance $x+\Delta x$ away from you. The waves from these two sources add like: $$ s(x,t) = s_0 \sin(k x - \omega t) + s_0 \sin(k (x + \Delta x) - \omega t). $$ The resultant wave at any point is given by $y = A\sin[(kx - \omega t) + \phi]$, where $A^2 = A_1^2 + A_2^2 + 2A_1A_2\cos\phi$. Now, as intensity $I \propto A^2$, this equation can be written as $I = I_1 + I_2 + 2\sqrt{I_1 I_2}\cos\phi$. EDIT: Coherent sources have the same frequency but they can have varying wavelengths, so why is the wavelength assumed equal here?
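As a quick numerical cross-check of the amplitude formula (my own sketch, not part of the original question), the snippet below superposes the two equal-amplitude waves at a fixed point and compares the measured amplitude of the resultant oscillation with $\sqrt{A_1^2 + A_2^2 + 2A_1A_2\cos\phi}$, where the phase difference is $\phi = k\,\Delta x$; all numerical values are arbitrary test values.

```python
import numpy as np

s0, k, omega, dx = 1.3, 2.0, 5.0, 0.7   # arbitrary test values
phi = k * dx                            # phase difference between the two sources

t = np.linspace(0, 2 * np.pi / omega, 10_000)   # one full period at x = 0
x = 0.0
s = s0 * np.sin(k * x - omega * t) + s0 * np.sin(k * (x + dx) - omega * t)

A_measured = s.max()                    # amplitude of the resultant oscillation
A_formula = np.sqrt(s0**2 + s0**2 + 2 * s0 * s0 * np.cos(phi))
print(A_measured, A_formula)            # the two values agree closely
```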
The 3SUM problem tries to identify 3 integers $a,b,c$ from a set $S$ of size $n$ such that $a + b + c = 0$. It is conjectured that there is no better solution than quadratic, i.e. $O(n^2)$, or to put it differently: $O(n \log(n) + n^2)$. So I was wondering if this would apply to the generalised problem: Find $k$ integers $a_i$ for $i \in [1..k]$ in a set $S$ of size $n$ such that $\sum_{i \in [1..k]} a_i = 0$. I think you can do this in $O(n \log(n) + n^{k-1})$ for $k \geq 2$ (it's trivial to generalise the simple $k=3$ algorithm). But are there better algorithms for other values of $k$?
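For reference, here is a sketch of the standard sort-plus-two-pointers routine for $k = 3$ that the question alludes to; the generalisation fixes $k-2$ of the elements (at $O(n^{k-2})$ cost) and runs the same two-pointer scan on the rest, giving $O(n\log n + n^{k-1})$. This version uses each array position at most once; whether the same value may be reused is a modelling choice.

```python
def three_sum_exists(nums):
    """O(n^2) after sorting: fix the first element, close in with two pointers."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1     # total too small: move the left pointer up
            else:
                hi -= 1     # total too large: move the right pointer down
    return False

print(three_sum_exists([-5, 1, 2, 3, 7]))   # True:  -5 + 2 + 3 = 0
print(three_sum_exists([1, 2, 4]))          # False
```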
Basic Statistics Basic statistics for a discrete variable X: Mean(μ) = Expected Value E[X] =\(\frac{1}{n} \sum_{i=1}^{n} x_{i} \) Median If n is odd then \(x_{\frac{n+1}{2}}\) else \(\frac{x_{\frac{n}{2}} + x_{\frac{n+2}{2}}}{2}\) Variance (\(\sigma^{2}=E_{x \sim p(x)}[(X-E[X])^2]\)) (n-1 is called the degree of freedom) =\(\frac{1}{n-1}\sum_{i=1}^{n} (x_{i} -\mu)^{2}\) Standard deviation (\(\sigma\)) =\(\sqrt{\sigma^{2}}\) Mode is the value x at which the probability mass function takes its maximum value. For example the mode of {1,1,1,2,2,3,4} is 1 because it appears 3 times. Covariance(X,Y) \(= E[(X-E[X])(Y-E[Y])]\) \(= E[XY] -E[X]E[Y]\) \(= \frac{1}{n-1} \sum_{i=1}^{n} (X_i - μ_X)(Y_i - μ_Y)\) Correlation(X,Y) =\(\frac{Covariance(X,Y)}{\sqrt{Var(X).Var(Y)}}\) Standard error \(= \frac{σ}{\sqrt{n}}\) Basic statistics for a continuous variable X: Mean(μ) = Expected Value E[X] =\(\int_{all\, x} p(x)\,x\,dx\) Median m such that P(x ≤ m) = 0.5 Variance (\(\sigma^{2}=E_{x \sim p(x)}[(X-E[X])^2]\)) =\(\int_{all\, x} p(x)\,(x -\mu)^{2}\,dx\) Standard deviation (\(\sigma\)) =\(\sqrt{\sigma^{2}}\) Mode is the value x at which the probability density function has its maximum value. Examples: For a set {-1, -1, 1, 1} => Mean = 0, Variance = 1.33, Standard deviation = 1.15 If \(x \in [0, +\infty]\) and p(x) = exp(-x) => Mean = 1, Variance = 1, Standard deviation = 1 Expected value Expectations are linear. E[X+Y] = E[X] + E[Y] Probability Distributions A random variable assigns a numerical value to each outcome of a random experiment. A probability distribution is a function that returns the probability of occurrence of an outcome. For discrete random variables, this function is called the “Probability Mass Function”. For continuous variables, this function is called the “Probability Density Function”. A joint probability distribution is a function that returns the probability of joint occurrence of outcomes from two or more random variables. If random variables are independent then the joint probability distribution is equal to the product of the probability distributions of each random variable. A conditional probability distribution is the probability distribution of a random variable given another random variable. Example: P(X) is the probability distribution of X: P(X=A) = 0.2, P(X=B) = 0.8. P(Y) is the probability distribution of Y: P(Y=C) = 0.1, P(Y=D) = 0.9. P(X,Y) is the joint probability distribution of X and Y: P(A,C) = 0.1, P(A,D) = 0.1, P(B,D) = 0.8. P(X|Y=D) is the conditional probability distribution of X given Y = D: P(A|Y=D) = 0.1/0.9, P(B|Y=D) = 0.8/0.9. Marginal probability Sometimes we know the probability distribution over a set of variables and we want to know the probability distribution over just a subset of them. The probability distribution over the subset is known as the marginal probability distribution. For example, suppose we have discrete random variables x and y, and we know P(x, y). We can find P(x) with the sum rule: \(P(x) = \sum_y P(x,y)\) Below are the statistical properties of some distributions. Binomial distribution Number of possible outcomes = 2 (like a coin) n = number of trials p = probability of success P(X) = \(C_n^X * p^{X} * (1-p)^{n-X}\) Expected value = n.p Variance = n.p.(1-p) Example: If we flip a fair coin (p=0.5) three times (n=3), what’s the probability of getting two heads and one tail? P(X=2) = P(2H and 1T) = P(HHT + HTH + THH) = P(HHT) + P(HTH) + P(THH) = p.p(1-p) + p.(1-p).p + (1-p).p.p = \(C_3^2.p^2.(1-p)\) Bernoulli distribution Bernoulli distribution is a special case of the binomial distribution with n=1. 
Number of possible outcomes = 2 (like a coin) n = 1 (number of trials) p = probability of success X \(\in\) {0,1} P(X) = \(p^{X} * (1-p)^{1-X}\) Expected value = p Variance = p.(1-p) Example: If we flip a fair coin (p=0.5) one time (n=1), what’s the probability of getting 0 heads? P(X=0) = P(0H) = P(1T) = 1-p = \(p^0.(1-p)^1\) Multinomial distribution It’s a generalization of the Binomial distribution. In a Multinomial distribution we can have more than two outcomes. For each outcome, we can assign a probability of success. Normal (Gaussian) distribution P(x) = \(\frac{1}{\sigma \sqrt{2\pi}} exp(-\frac{1}{2} (\frac{x-\mu}{\sigma})^{2})\) σ and μ are sufficient statistics (sufficient to describe the whole curve) Expected value = \(\int_{-\infty}^{+\infty} p(x) x \, dx\) Variance = \(\int_{-\infty}^{+\infty} (x - \mu)^2 p(x) \, dx\) Standard Normal distribution (Z-Distribution) It’s a normal distribution with mean = 0 and standard deviation = 1. P(z) = \(\frac{1}{\sqrt{2\pi}} exp(-\frac{1}{2} z^2)\) Cumulative distribution function:\(P(x \leq z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}} exp(-\frac{1}{2} x^2) \, dx\) Exponential Family distribution \(P(x;\theta) = h(x)\exp \left(\eta (θ)\cdot T(x)-A(θ)\right) \), where T(x), h(x), η(θ), and A(θ) are known functions. θ = vector of parameters. T(x) = vector of “sufficient statistics”. A(θ) = cumulant generating function. The Binomial distribution is an Exponential Family distribution:\(P(x)=C_n^x\ p^{x}(1-p)^{n-x},\quad x\in \{0,1,2,\ldots ,n\}\) This can equivalently be written as:\(P(x)=C_n^x\ exp (log(\frac{p}{1-p}).x - (-n.log(1-p)))\) The Normal distribution is an Exponential Family distribution Consider a random variable distributed normally with mean μ and variance \(σ^2\). The probability density function could be written as: \(P(x;θ) = h(x)\exp(η(θ).T(x)-A(θ)) \) With:\(h(x)={\frac{1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {x^{2}}{2\sigma ^{2}}}}\) \(T(x)={\frac {x}{\sigma }}\) \(A(\mu)={\frac {\mu ^{2}}{2\sigma ^{2}}}\) \(\eta(\mu)={\frac {\mu }{\sigma }}\) Poisson distribution The Poisson distribution is popular for modelling the number of times an event occurs in an interval of time or space (eg. number of arrests, number of fish in a trap…). In a Poisson distribution, values are discrete and can’t be negative. The probability mass function is defined as: \(P(x=k)=\frac{λ^k.e^{-λ}}{k!}\), k is the number of occurrences. λ is the expected number of occurrences. Exponential distribution The exponential distribution has a probability distribution with a sharp point at x = 0.\(P(x; λ) = λ.1_{x≥0}.exp (−λx)\) Laplace distribution Laplace distribution has a sharp peak of probability mass at an arbitrary point μ. The probability density function is defined as \(Laplace(x;μ,γ) = \frac{1}{2γ} exp(-\frac{|x-μ|}{γ})\) Laplace distribution is a distribution that is symmetrical and more “peaky” than a normal distribution. The dispersion of the data around the mean is higher than that of a normal distribution. Laplace distribution is also sometimes called the double exponential distribution. Dirac distribution The probability density function is defined as \(P(x;μ) = δ(x-μ)\) such that δ(x-μ) = 0 when x ≠ μ and \(\int_{-∞}^{∞} δ(x-μ)\, dx= 1\). Empirical Distribution Other known Exponential Family distributions: Dirichlet. Laplace Smoothing Given a set S={a1, a1, a1, a2}. 
Laplace smoothed estimate for P(x) with domain of x in {a1, a2, a3}:\(P(x=a1)=\frac{3 + 1}{4 + 3}\) \(P(x=a2)=\frac{1 + 1}{4 + 3}\) \(P(x=a3)=\frac{0 + 1}{4 + 3}\) Maximum Likelihood Estimation Given three independent data points \(x_1=1, x_2=0.5, x_3=1.5\), what is the mean μ of a normal distribution that these three points are most likely to come from (we suppose the variance = 1)? If μ = 4, then the probabilities \(P(X=x_1), P(X=x_2), P(X=x_3)\) will be low, and \(P(x_1,x_2,x_3) = P(X=x_1)*P(X=x_2)*P(X=x_3)\) will also be low. If μ = 1, then the probabilities \(P(X=x_1), P(X=x_2), P(X=x_3)\) will be high, and \(P(x_1,x_2,x_3) = P(X=x_1)*P(X=x_2)*P(X=x_3)\) will also be high. This means that the three points are more likely to come from a normal distribution with mean μ = 1. The likelihood function is defined as: \(P(x_1,x_2,x_3; μ)\) Central Limit Theorem The central limit theorem states that if you have a population with mean μ and standard deviation σ and take sufficiently large random samples from the population with replacement, then the distribution of sample means will be approximately normally distributed. Bayesian Network X1, X2 are random variables. P(X1,X2) = P(X2,X1) = P(X2|X1) * P(X1) = P(X1|X2) * P(X2) P(X1) is called the prior probability. P(X1|X2) is called the posterior probability. Example: A mixed school has 60% boys and 40% girls as students. The girls wear trousers or skirts in equal numbers; the boys all wear trousers. An observer sees from a distance a student wearing trousers. What is the probability this student is a girl? The prior probability P(Girl): 0.4 The posterior probability P(Girl|Trouser): \(\frac{P(Trouser|Girl)*P(Girl)}{P(Trouser|Girl) * P(Girl) + P(Trouser|Boy) * P(Boy)} = 0.25\) Parameters estimation – Bayesian Approach Vs Frequentist Approach There are two approaches that can be used to estimate the parameters of a model. The frequentist approach (maximum likelihood estimation): \(arg\ \underset{θ}{max} \prod_{i=1}^m P(y^{(i)}|x^{(i)};θ)\) The Bayesian approach (maximum a posteriori estimation): \(arg\ \underset{θ}{max} P(θ|\{(x^{(i)}, y^{(i)})\}_{i=1}^m)\)\(=arg\ \underset{θ}{max} \frac{P(\{(x^{(i)}, y^{(i)})\}_{i=1}^m|θ) * P(θ)}{P(\{(x^{(i)}, y^{(i)})\}_{i=1}^m)}\)\(=arg\ \underset{θ}{max} P(\{(x^{(i)}, y^{(i)})\}_{i=1}^m|θ) * P(θ)\) If \(\{(x^{(i)}, y^{(i)})\}\) are independent, then:\(=arg\ \underset{θ}{max} \prod_{i=1}^m P((y^{(i)},x^{(i)})|θ) * P(θ)\) To calculate P(θ) (called the prior), we assume that θ is Gaussian with mean 0 and variance \(\sigma^2\).\(=arg\ \underset{θ}{max} log(\prod_{i=1}^m P((y^{(i)},x^{(i)})|θ) * P(θ))\) \(=arg\ \underset{θ}{max} log(\prod_{i=1}^m P((y^{(i)},x^{(i)})|θ)) + log(P(θ))\) After a few derivations, we will find that the expression is equivalent to the L2 regularized linear cost function:\(=arg\ \underset{θ}{min} \frac{1}{2} \sum_{i=1}^{n} (y^{(i)}-h_{θ}(x^{(i)}))^{2} + λ θ^Tθ\) Because of the prior, Bayesian algorithms are less susceptible to overfitting. Cumulative distribution function (CDF) Given a random continuous variable S with density function p(s), the cumulative distribution function is \(F(s) = P(S \le s) = \int_{-∞}^{s} p(u)\, du\) and F'(s) = p(s)
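As a tiny concrete check of the Bayesian-network example above, the following sketch recomputes P(Girl|Trouser) with Bayes' rule; the probabilities are exactly the ones stated in the example.

```python
def posterior_girl_given_trousers(p_girl=0.4, p_trousers_given_girl=0.5,
                                  p_trousers_given_boy=1.0):
    """Bayes' rule: P(Girl|Trousers) = P(Trousers|Girl) P(Girl) / P(Trousers)."""
    p_boy = 1.0 - p_girl
    evidence = p_trousers_given_girl * p_girl + p_trousers_given_boy * p_boy
    return p_trousers_given_girl * p_girl / evidence

print(posterior_girl_given_trousers())   # 0.25, matching the worked example
```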
Okay, now I've rather carefully discussed one example of \(\mathcal{V}\)-enriched profunctors, and rather sloppily discussed another. Now it's time to build the general framework that can handle both these examples. We can define \(\mathcal{V}\)-enriched categories whenever \(\mathcal{V}\) is a monoidal preorder: we did that way back in Lecture 29. We can also define \(\mathcal{V}\)-enriched functors whenever \(\mathcal{V}\) is a monoidal preorder: we did that in Lecture 31. But to define \(\mathcal{V}\)-enriched profunctors, we need \(\mathcal{V}\) to be a bit better. We can see why by comparing our examples. Our first example involved \(\mathcal{V} = \textbf{Bool}\). A feasibility relation $$ \Phi : X \nrightarrow Y $$ between preorders is a monotone function $$ \Phi: X^{\text{op}} \times Y\to \mathbf{Bool} . $$ We shall see that a feasibility relation is the same as a \( \textbf{Bool}\)-enriched profunctor. Our second example involved \(\mathcal{V} = \textbf{Cost}\). I said that a \( \textbf{Cost}\)-enriched profunctor $$ \Phi : X \nrightarrow Y $$ between \(\mathbf{Cost}\)-enriched categories is a \( \textbf{Cost}\)-enriched functor $$ \Phi: X^{\text{op}} \times Y \to \mathbf{Cost} $$ obeying some conditions. But I let you struggle to guess those conditions... without enough clues to make it easy! To fit both our examples in a general framework, we start by considering an arbitrary monoidal preorder \(\mathcal{V}\). \(\mathcal{V}\)-enriched profunctors will go between \(\mathcal{V}\)-enriched categories. So, let \(\mathcal{X}\) and \(\mathcal{Y}\) be \(\mathcal{V}\)-enriched categories. We want to make this definition: Tentative Definition. A \(\mathcal{V}\)-enriched profunctor $$ \Phi : \mathcal{X} \nrightarrow \mathcal{Y} $$ is a \(\mathcal{V}\)-enriched functor $$ \Phi: \mathcal{X}^{\text{op}} \times \mathcal{Y} \to \mathcal{V} .$$ Notice that this handles our first example very well. But some questions appear in our second example - and indeed in general. For our tentative definition to make sense, we need three things: We need \(\mathcal{V}\) to itself be a \(\mathcal{V}\)-enriched category. We need any two \(\mathcal{V}\)-enriched categories to have a 'product', which is again a \(\mathcal{V}\)-enriched category. We need any \(\mathcal{V}\)-enriched category to have an 'opposite', which is again a \(\mathcal{V}\)-enriched category. Items 2 and 3 work fine whenever \(\mathcal{V}\) is a commutative monoidal poset. We'll see why in Lecture 62. Item 1 is trickier, and indeed it sounds rather scary. \(\mathcal{V}\) began life as a humble monoidal preorder. Now we're wanting it to be enriched in itself! Isn't that circular somehow? Yes! But not in a bad way. Category theory often eats its own tail, like the mythical ouroboros, and this is an example. To get \(\mathcal{V}\) to become a \(\mathcal{V}\)-enriched category, we'll demand that it be 'closed'. For starters, let's assume it's a monoidal poset, just to avoid some technicalities. Definition. A monoidal poset is closed if for all elements \(x,y \in \mathcal{V}\) there is an element \(x \multimap y \in \mathcal{V}\) such that $$ x \otimes a \le y \text{ if and only if } a \le x \multimap y $$ for all \(a \in \mathcal{V}\). This will let us make \(\mathcal{V}\) into a \(\mathcal{V}\)-enriched category by setting \(\mathcal{V}(x,y) = x \multimap y \). But first let's try to understand this concept a bit! 
We can check that our friend \(\mathbf{Bool}\) is closed. Remember, we are making it into a monoidal poset using 'and' as its binary operation: its full name is \( (\lbrace \text{true},\text{false}\rbrace, \wedge, \text{true})\). Then we can take \( x \multimap y \) to be 'implication'. More precisely, we say \( x \multimap y = \text{true}\) iff \(x\) implies \(y\). Even more precisely, we define: $$ \text{true} \multimap \text{true} = \text{true} $$$$ \text{true} \multimap \text{false} = \text{false} $$$$ \text{false} \multimap \text{true} = \text{true} $$$$ \text{false} \multimap \text{false} = \text{true} . $$ Puzzle 188. Show that with this definition of \(\multimap\) for \(\mathbf{Bool}\) we have $$ a \wedge x \le y \text{ if and only if } a \le x \multimap y $$ for all \(a,x,y \in \mathbf{Bool}\). We can also check that our friend \(\mathbf{Cost}\) is closed! Remember, we are making it into a monoidal poset using \(+\) as its binary operation: its full name is \( ([0,\infty], \ge, +, 0)\). Then we can define \( x \multimap y \) to be 'subtraction'. More precisely, we define \(x \multimap y\) to be \(y - x\) if \(y \ge x\), and \(0\) otherwise. Puzzle 189. Show that with this definition of \(\multimap\) for \(\mathbf{Cost}\) we have $$ a + x \le y \text{ if and only if } a \le x \multimap y . $$But beware. We have defined the ordering on \(\mathbf{Cost}\) to be the opposite of the usual ordering of numbers in \([0,\infty]\). So, \(\le\) above means the opposite of what you might expect! Next, two more tricky puzzles. Next time I'll show you in general how a closed monoidal poset \(\mathcal{V}\) becomes a \(\mathcal{V}\)-enriched category. But to appreciate this, it may help to try some examples first: Puzzle 190. What does it mean, exactly, to make \(\mathbf{Bool}\) into a \(\mathbf{Bool}\)-enriched category? Can you see how to do this by defining $$ \mathbf{Bool}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Bool}\), where \(\multimap\) is defined to be 'implication' as above? Puzzle 191. What does it mean, exactly, to make \(\mathbf{Cost}\) into a \(\mathbf{Cost}\)-enriched category? Can you see how to do this by defining $$ \mathbf{Cost}(x,y) = x \multimap y $$ for all \(x,y \in \mathbf{Cost}\), where \(\multimap\) is defined to be 'subtraction' as above? Note: for Puzzle 190 you might be tempted to say "a \(\mathbf{Bool}\)-enriched category is just a preorder, so I'll use that fact here". However, you may learn more if you go back to the general definition of enriched category and use that! The reason is that we're trying to understand some general things by thinking about two examples. Puzzle 192. The definition of 'closed' above is an example of a very important concept we keep seeing in this course. What is it? Restate the definition of closed monoidal poset in a more elegant, but equivalent, way using this concept.
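Puzzles 188 and 189 can also be checked mechanically by brute force; the little script below (my own sketch, not part of the lecture) verifies the adjunction \(a \otimes x \le y \iff a \le x \multimap y\) for \(\mathbf{Bool}\) with \(\otimes = \wedge\) and for \(\mathbf{Cost}\) with \(\otimes = +\), remembering that the order on \(\mathbf{Cost}\) is the reverse of the usual numerical order.

```python
from itertools import product
import random

# Bool = ({false, true}, <=, and, true); here x <= y means "x implies y".
leq_bool = lambda x, y: (not x) or y
imp      = lambda x, y: (not x) or y          # x -o y, i.e. implication

assert all(leq_bool(a and x, y) == leq_bool(a, imp(x, y))
           for a, x, y in product([False, True], repeat=3))

# Cost = ([0, oo], >=, +, 0); its order is the *reverse* of the numeric one.
leq_cost = lambda x, y: x >= y                # "x <= y in Cost" means x >= y numerically
hom      = lambda x, y: max(y - x, 0)         # x -o y = truncated subtraction

random.seed(1)
for _ in range(10_000):
    a, x, y = (random.randint(0, 20) for _ in range(3))
    assert leq_cost(a + x, y) == leq_cost(a, hom(x, y))

print("the adjunction a (tensor) x <= y  iff  a <= x -o y holds in both examples")
```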
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector (Elsevier, 2014-11-10) This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ... Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector (Elsevier, 2014-11-10) Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
There's a close connection between counting the number of solutions and randomly sampling from the set of solutions. Any time you need to randomly sample, it's often helpful to ask yourself how you'd count the number of solutions, and then you can often turn that into a way to randomly sample. So, one approach is to use dynamic programming to count the number of ways to select numbers from $S$ that sum to $k$ (weighted by their weights $W$), then use that to help you sample from this space. Let me spell out the details more. Define $$f(S,W,k) = \sum_I \prod_{i \in I} w_i,$$ where $I$ ranges over all sets of indices such that $\sum_{i \in I} n_i = k$. Notice that if all the weights were 1, then $f(S,W,k)$ would count the number of ways to select these numbers; in general, with arbitrary weights, you can think of $f(S,W,k)$ as a weighted count, where you sum up the weights of each candidate combination, and the weight of a combination is the product of the weights of the numbers selected. You can compute $f(S,W,k)$ using dynamic programming, using the recurrence $$f(\{n_1,\dots,n_j\},W,k) = f(\{n_1,\dots,n_{j-1}\},W,k) + w_j f(\{n_1,\dots,n_{j-1}\},W,k-n_j).$$ Now you want to sample a set $I$ with probability proportional to $\prod_{i \in I} w_i$. This can be done using your algorithm for computing $f(S,W,k)$. In particular, if $S=\{n_1,\dots,n_m\}$, then flip a coin with heads probability $${f(\{n_1,\dots,n_{m-1}\},W,k) \over f(\{n_1,\dots,n_m\},W,k)}.$$ If it is heads, then don't include $n_m$ in the solution: instead, recursively sample some numbers from $\{n_1,\dots,n_{m-1}\}$ that sum to $k$, and output that as the random sample. If it is tails, then do include $n_m$ in the solution: recursively sample some numbers from $\{n_1,\dots,n_{m-1}\}$ that sum to $k-n_m$, then add $n_m$ to that combination, and output that as the random sample. You can see that this induces the correct probability distribution on samples. Overall, the running time will be comparable to the time to compute $f(S,W,k)$, or in other words, the running time will be $O(|S| \cdot k)$. This is much better than the $O(|S|^k)$ naive solution. It can still be very slow if $k$ is enormous, but if $k$ is not too big, it might be perfectly satisfactory.
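Here is a compact sketch of this count-then-sample idea in Python (my own illustration of the answer, assuming the $n_i$ are positive integers and the weights are positive reals); `count_table` builds the DP table for $f$ and `sample_subset` walks it backwards, flipping the biased coin described above at each index.

```python
import random

def count_table(nums, weights, k):
    """f[j][t] = weighted number of index sets I within the first j numbers whose
    elements sum to t, each set weighted by the product of its weights."""
    m = len(nums)
    f = [[0.0] * (k + 1) for _ in range(m + 1)]
    f[0][0] = 1.0
    for j in range(1, m + 1):
        n_j, w_j = nums[j - 1], weights[j - 1]
        for t in range(k + 1):
            f[j][t] = f[j - 1][t]                       # exclude nums[j-1]
            if t >= n_j:
                f[j][t] += w_j * f[j - 1][t - n_j]      # include nums[j-1]
    return f

def sample_subset(nums, weights, k, rng=random):
    """Sample an index set I with P(I) proportional to the product of its weights,
    among all I whose elements sum to k."""
    f = count_table(nums, weights, k)
    if f[len(nums)][k] == 0:
        raise ValueError("no subset sums to k")
    chosen, t = [], k
    for j in range(len(nums), 0, -1):
        p_exclude = f[j - 1][t] / f[j][t]
        if rng.random() < p_exclude:
            continue                      # leave nums[j-1] out
        chosen.append(j - 1)              # take nums[j-1]
        t -= nums[j - 1]
    return chosen

print(sample_subset([1, 2, 3, 4, 5], [1.0, 2.0, 1.0, 1.0, 3.0], k=6))
```

The table costs $O(|S|\cdot k)$ time and space, and each sample afterwards costs only $O(|S|)$, which matches the running-time claim above.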
Scaling factor a(t) and Hubble's Parameter H(t) Shortly after the precise quantitative predictions of Einstein’s general relativity concerning the precession of Mercury’s perihelion and the deflection angle of rays of light passing the Sun, Einstein moved beyond investigations of the solar system and applied general relativity to the entire Universe. He wondered what the effects of gravity would be due to all the masses in the Universe. This might seem like an impossible task, but Einstein greatly simplified matters by assuming that the distribution of all the matter in the Universe was spatially uniform. He called this assumption the cosmological principle. This means that the distribution of all mass throughout space is homogeneous and isotropic. If the mass distribution is homogeneous, then if you draw a line in any direction which extends throughout all of space, all of the mass distribution along that line will be equally spaced; isotropy means that the distribution of mass is the same in all directions. If a distribution of mass is both homogeneous and isotropic then it is equally spaced and the same in all directions. Later observations (in particular the Cosmic Microwave Background Radiation) proved that on the scale of hundreds of millions of light-years across space, the distribution of galaxies is very nearly (up to minuscule non-uniformities) homogeneous and isotropic; thus on this scale the cosmological principle is a reasonable idealization. Figure 1 Imagine that we draw a line through our galaxy that extends across space for hundreds of millions of light-years. Let’s label this line with equally spaced points which have fixed coordinate values \(x^1\). Imagine that embedded and attached to those points are point-masses (each having a mass \(m\)) which we can think of as galaxies. If we stretch or contract this line, the point-masses (galaxies) will either move away from or towards one another. The coordinate value \(x^1\) of each mass does not change since as we stretch the line, the point embedded in the line and the galaxy remain “overlapping each other.” We shall, for simplicity, consider our galaxy to be located at the origin of the coordinate system at \(x^1 = 0\) although (as we will soon see) the choice of the origin is completely arbitrary. We define the distance between galaxies on this line to be \(D\equiv{a(t)∆x^1}\) where, based on this definition, the scaling factor \(a(t)\) is the distance \(D=a(t)\cdot1=a(t)\) between two galaxies separated by \(∆x^1=1\). (I repeat, the coordinate value \(x^1\) of each galaxy doesn’t change and, therefore, the “coordinate separation” \(∆x^1\) between galaxies doesn’t change.) We will assume that the masses along this line are homogeneously distributed which just means that all of the masses are, at all times \(t\), equally spaced. In other words, at all times \(t\), the distance \(D=a(t)(x^1 - x^1_0) = a(t)\) (where \(∆x^1=(x^1 - x^1_0)=1\)) between any two galaxies on the line separated by \(∆x^1=1\) with any coordinates \(x^1\) and \(x^1_0\); this is just a mathematically precise way of saying that the distance \(D=a(t)\) between two galaxies separated by “one coordinate unit” doesn’t depend on where we are on the line (\(x^1\) and \(x^1_0\) could be anything, the distance will still be the same.) Let’s draw another line (at a right angle to the first) through our galaxy which, also, extends for hundreds of millions of light-years across space. 
Let’s also label this line with equally spaced points on which galaxies of mass \(m\) sit. We will also assume that the distribution of masses along this line is homogeneous (meaning they are all equally spaced) and that the spacing between these points is the same as the spacing between the points on the other line (which means that the total mass distribution along both lines is isotropic). Isotropic just means that the distribution of mass is the same in all directions. The equation \(D=a(t)∆x^2\) is the distance \(D\) between two galaxies on the vertical line drawn in the picture. We can find the distance \(D\) between two galaxies with coordinates \((x^1_0, x^2_0)\) and \((x^1, x^2)\) using the Pythagorean Theorem. Their separation distance \(D_{x^1\text{-axis}}\) along the horizontal line is \(D_{x^1\text{-axis}}=a(t)∆x^1\) and their separation distance \(D_{x^2\text{-axis}}\) along the vertical line is \(D_{x^2\text{-axis}}= a(t)∆x^2\). Using the Pythagorean Theorem, we see that \(D=\sqrt{(D_{x^1\text{-axis}})^2+(D_{x^2\text{-axis}})^2}\). To make this equation more compact, let \(∆r=\sqrt{(∆x^1)^2 +(∆x^2)^2}\) which we can think of as the “coordinate separation distance” which doesn’t change. Then we can write the distance as \(D=a(t)∆r\). If we drew a third line going through our galaxy (at right angles to the two other lines), we could find the distance between two points in space with coordinates \(x^i_0 = (x^1_0, x^2_0, x^3_0)\) and \(x^i=(x^1, x^2, x^3)\), using the Pythagorean Theorem in three dimensions, to be $$D=a(t)\sqrt{(∆x^1)^2 + (∆x^2)^2 + (∆x^3)^2}.\tag{1}$$ Equation (1) gives us the distance \(D\) between any two points with coordinates \(x^i_0\) and \(x^i\). Since the galaxies always have fixed coordinate values, we can simply view equation (1) as the distance between any two galaxies in space. (Later on, we will come up with a “particles in the box” model where, in general, the particles will not have fixed coordinate values and it will be more useful to think of equation (1) as the distance between coordinate points.) Although the coordinate separation \(∆r\) between galaxies does not change, because (in general) the space can be expanding or contracting, the scaling factor \(a(t)\) (the distance \(D\) between “neighboring galaxies” whose coordinate separation is \(∆r=1\)) will vary with time \(t\) (where \(t\) is the time measured by an ideal clock which is at rest with respect to our galaxy’s reference frame). (We shall see later on that the FRW equation determines how \(a(t)\) changes with \(t\) based on the energy density \(ρ\) at each point in space and the value of \(κ\).) Since \(a(t)\) is changing with time, it follows that the distance \(D=a(t)∆r\) between any two galaxies is also changing with time. For example, the distance \(D\) between our galaxy and other, far off galaxies is actually growing with time \(t\). Since the distance \(D\) between any two galaxies changes with time according to the scaling factor \(a(t)\), there must be some relative velocity \(V\) between those two galaxies as their separation distance increases with time. To find the relative velocity \(V\) between any two galaxies, we take the time rate-of-change of their separation distance \(D\) to obtain \(V=dD/dt \). 
\(∆r\) is just a constant and the scaling factor \(a(t)\) is some function of time; thus the derivative is $$V=\frac{dD}{dt}=\frac{d}{dt}(a(t)∆r)=∆r\frac{d}{dt}(a(t)).$$ Let’s multiply the right-hand side of the equation by \(a(t)/a(t)\) to get $$V=a(t)∆r\,\frac{da(t)/dt}{a(t)}.$$ \(a(t)∆r\) is just the distance \(D\) between the two galaxies moving away at a relative velocity \(V\); thus, $$V=D\,\frac{da(t)/dt}{a(t)}.$$ The term \(\frac{da(t)/dt}{a(t)}\) is called Hubble’s parameter, which is represented by \(H(t)\): $$H(t)=\frac{da(t)/dt}{a(t)}.\tag{2}$$ Substituting Hubble's parameter for \(\frac{da(t)/dt}{a(t)}\), we get $$V=H(t)D.\tag{3}$$ The value of Hubble’s parameter at our present time is called Hubble’s constant and is represented by \(H(today)=H_0\). Thus, at our present time, the recessional velocity between any two galaxies is given by $$V=H_0D,\tag{4}$$ and the value of Hubble’s constant has been measured to be $$H_0≈70\text{ km/s/Mpc}≈2.3\times 10^{-18}\text{ s}^{-1}.\tag{5}$$ Since \(H_0\) is a positive constant, this tells us that (at \(t=today\), not later times, because \(H(t)\) varies with time) the farther away a galaxy or object is from us (our galaxy), the faster it’s moving away. The bigger \(D\) is, the bigger \(V\) is. By substituting Equation (5) into Equation (4) and by measuring the separation distance \(D\) between any two galaxies, we can use Equation (4) to calculate the relative, recessional speeds between those galaxies—today. To determine \(V\) as a function of time, you must compute \(a(t)\) from the FRW equation, then substitute \(a(t)\) into Equation (3); but this will be discussed later on. By substituting sufficiently big values of \(D\) (namely, values which are tens of billions of light-years) into Equation (4), one will discover that it is possible for two galaxies to recede away from one another at speeds exceeding that of light. This, however, does not violate the special theory of relativity which restricts the speeds of massive objects through space to being less than that of light. This is because it is space itself which is expanding faster than the speed of light and general relativity places no limit on how rapidly space or spacetime can expand or contract. It might seem unintuitive, but the two coordinate points \(x^i_0\) and \(x^i\) are not actually moving through space at all. Of course, the galaxies do have some motion and velocity through space; but it is a useful idealization and approximation to assume that they are "attached" to the coordinate points and not moving through space at all. Sir Arthur Eddington’s favorite analogy for this was an expanding balloon with two points drawn on its surface. As the balloon expands, the points are indeed moving away from one another; but those points are not actually moving across the space (which, in this example, is the sphere \(S^2\)). Age of the universe We can use Hubble’s Law to come up with a rough estimate of the age of the universe. If all of the galaxies are moving away from one another then that means that yesterday they must have been closer to one another—and a week ago even closer. If you keep running the clock back far enough, then at some time all of the galaxies and matter in the universe must have been on top of each other. Let’s assume that during that entire time interval (which we’ll call \(t_{\text{age of the universe}}\)) the recessional velocity \(V\) of every galaxy is exactly proportional to \(D\) (which, empirically, is very close to being true). 
Then it follows that the ratio \(D/V = 1/H\) is the same for every galaxy; in particular, it equals its present value \(1/H_0\). Let’s also assume that during the entire history of the universe the velocity \(V\) of every galaxy remained constant. Then, according to kinematics, the time \(t_{\text{age of the universe}}\) that it took for every galaxy to go from being on top of one another (when \(D=0\)) to being where they are today is given by the equation \(t_{\text{age of the universe}}=D/V=1/H_0≈\text{14 billion years}\). (When this calculation was first performed it gave an estimate for the age of the universe of only about 1.8 billion years. Although Hubble correctly measured the recessional velocities of the galaxies, his distance measurements were off by about a factor of ten. Later astronomers corrected his distance measurements.) To come up with a more accurate age of the universe we have to account for the acceleration/deceleration of the galaxies. When we do this we are able to obtain the more accurate estimate \(t_{\text{age of the universe}}≈\text{13.8 billion years}\).
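As a quick arithmetic check of the \(t \approx 1/H_0\) estimate (a sketch, not from the text), converting \(H_0\) from km/s/Mpc to 1/s and inverting reproduces both the modern figure of roughly 14 billion years and the roughly 2-billion-year figure implied by Hubble's original, too-large value of \(H_0\).

```python
# Rough age-of-the-universe estimate t ~ 1/H_0, assuming constant recession speeds.
KM_PER_MPC = 3.086e19          # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16     # seconds in one billion years

def age_gyr(H0_km_s_Mpc):
    H0_per_s = H0_km_s_Mpc / KM_PER_MPC        # convert H_0 to 1/s
    return 1.0 / H0_per_s / SECONDS_PER_GYR    # 1/H_0 in billions of years

print(age_gyr(70))    # ~14 Gyr, using the modern value of H_0
print(age_gyr(500))   # ~2 Gyr, using Hubble's original (too large) value
```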
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in... Considering this pseudo-code of a bubblesort:
FOR i := 0 TO arraylength(list) STEP 1
  switched := false
  FOR j := 0 TO arraylength(list)-(i+1) STEP 1
    IF list[j] > list[j + 1] THEN
      switch(list,j,j+1)
      switched := true
    ENDIF
  NEXT
  IF switch...
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen... Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl... People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f... Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a... I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac... This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s... There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com... Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not... Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}... I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have.On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo... Sudoku generation is hard enough. 
It is much harder when you have to make an application that makes a completely random Sudoku.The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a... I have observed that there are two different types of states in branch prediction.In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay.In the instruction pipeline, where the fetching is more problem since the inst... Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ... NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ... I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.On page 436 however, the au... This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't... This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few... This is somewhat related to this discussion, but different enough to deserve its own thread, I think.What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example:"How do I get the symme... EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:$S \rightarrow a a$$S \rightarrow b b$$S \rightarrow a S a$$S \rightarrow b S b$EPAL is the 'bane' of many parsing algorithms: I have yet to enc... Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$.The simple method is that the compu... Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.Inductive LTree : Set := Node : list LTree -> LTree.The naive way of d... I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting but its only connection to sorting is that the algorithm happens to be a type of sort, it's not about sorting per se.So should we tag questions on a pa... 
To what extent are questions about proof assistants on-topic?I see four main classes of questions:Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.Proving theorems in a way that can be automated in the chosen formal setting.Writing a co... Should topics in applied CS be on topic? These are not really considered part of TCS, examples include:Computer architecture (Operating system, Compiler design, Programming language design)Software engineeringArtificial intelligenceComputer graphicsComputer securitySource: http://en.wik... I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like. I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable.So far I have... It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automata. But, apparently, Buchi automata is a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl... Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,... One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two multiple stacks or tapes, have been shown to be equi... Though in the future it would probably a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure on where exactly you need directions in this case. What will the policy on providing code be?In my question it was commented that it might not be on topic as it seemes like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didnt ask for working C++ or whatever language.Should we only allow pseudo-code here?...
In preparation for my design and algorithms exam, I encountered the following problem. Given a $2 \times N$ integer matrix $(a[i][j] \in [-1000, 1000])$ and an unsigned integer $k$, find the maximum cost path from the top left corner $(a[1][1])$ to the bottom right corner $(a[2][N])$, given the following: $\bullet$ The path may not go through the same element more than once $\bullet$ You can move from one cell to the other vertically or horizontally $\bullet$ The path may not contain more than $k$ consecutive elements from the same line $\bullet$ The cost of the path is determined by the sum of all the values stored within the cells it passes through I've been thinking of a simple greedy approach that seems to work for the test cases I've tried, but I'm not sure if it's always optimal. Namely, while traversing the matrix, I select the maximum cost cell on the vertical or horizontal and then go from there. I've got a counter I only increment if the current cell is on the same line as the previously examined one and reset it otherwise. If at some point the selected element happens to be one that makes the counter go over the given value of $k$, I simply go with the other option that's left. However, I feel that I'm missing out on something terribly important here, but I just can't see what. Is there some known algorithm or approach that may be used here? Also, the problem asks for an optimal solution (in terms of time complexity).
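Since the question suspects the greedy idea may not be optimal, a useful first step is an exponential but obviously-correct reference solver to test it against on small inputs. The sketch below (my own, 0-indexed, not a proposed optimal algorithm) enumerates every simple path that respects the "at most $k$ consecutive cells from the same row" rule and returns the best cost; a dynamic program over columns can then be checked against it.

```python
def max_cost_path(a, k):
    """Brute-force reference: best total cost over all simple paths from a[0][0]
    to a[1][n-1] that never use more than k consecutive cells from the same row.
    Returns None if no valid path exists.  Exponential time, for testing only."""
    n = len(a[0])
    best = [None]

    def dfs(r, c, run, total, visited):
        if run > k:                       # too many consecutive cells on one row
            return
        if (r, c) == (1, n - 1):          # reached the bottom-right corner
            if best[0] is None or total > best[0]:
                best[0] = total
            return
        for nr, nc in ((r, c - 1), (r, c + 1), (1 - r, c)):   # left, right, vertical
            if 0 <= nc < n and (nr, nc) not in visited:
                new_run = run + 1 if nr == r else 1
                dfs(nr, nc, new_run, total + a[nr][nc], visited | {(nr, nc)})

    dfs(0, 0, 1, a[0][0], {(0, 0)})
    return best[0]

grid = [[3, -1, 4, 2],
        [1,  5, -2, 6]]
print(max_cost_path(grid, k=2))
```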
Math question on Newton's method and detecting actual zeros 02-07-2017, 05:04 PM Post: #1 Math question on Newton's method and detecting actual zeros (Admins: If this is in the wrong forum, please feel free to move it) This came up during a debugging process in which Newton's method (using backtracking linesearch) gave me a solution to the system \[ \frac{x\cdot y}{x+y} = 127\times 10^{-12}, \quad \left( \frac{x+y}{x} \right)^2 = 8.377 \] (This problem was posed on the HP Prime subforum: http://hpmuseum.org/forum/thread-7677.html) One solution I found was: \( x=1.94043067156\times 10^{-10}, \ y=3.67576704293\times 10^{-10} \) (hopefully no typos). On the Prime, the errors for the equations are of the order of \(10^{-19} \) and \(10^{-11}\) for the first and second equations, respectively (again, assuming I made no typos copying). So my question is: should a numerical solver treat \(1.27\times 10^{-10}\) as "significant" or 0 (especially when it comes time to check for convergence, when the tolerance for \( |f_i| \) might be set to, say, \( 10^{-10} \) -- here \( f_i \) is the i-th equation in the system, set equal to 0)? Graph 3D | QPI | SolveSys 02-07-2017, 06:45 PM Post: #2 RE: Math question on Newton's method and detecting actual zeros . Hi, Han: (02-07-2017 05:04 PM)Han Wrote: (Admins: If this is in the wrong forum, please feel free to move it) Your system is trivial to solve by hand, like this: 1) Parameterize: y = t*x 2) Substitute y=t*x into the first equation (a = 127E-12): x*t*x = a*(x+t*x) -> t*x^2 = a*(1+t)*x -> (assuming x is not 0, which would make the second equation meaningless) t*x = a*(1+t) -> x = a*(1+t)/t 3) Substitute y=t*x in the second equation (b=8.377) (1+t)^2 = b -> 1+t= sqr(b) -> t = sqr(b)-1 or t = -sqr(b)-1 4) let's consider the first case (the second is likewise): t = sqr(b)-1 = 1.8943047524405580466334231771918 5) substitute the value of t in the first equation above in (2): x = a*(1+t)/t = 1.9404306676968291608003859882111e-10 6) now, y=t*x, so: y = t*x = 3.6757670355995087192244474350336e-10 which gives your solution. Taking the negative sqrt would give another. As for your question, the best way to check for convergence is not to rely on some tolerance for the purported zero value when evaluating both equations for the computed x,y approximations in every iteration but rather to stop when consecutive approximations differ by less than a user-set tolerance expressed in ulps, i.e. units in the last place. For instance, if you're making your computation with 10 digits and you set your tolerance to 2 ulps you would stop iterating as soon as consecutive approximations for both x and y have 8 digits in common (mantissa digits, regardless of the exponents, which of course should be the same). Once you stop the iterations you should then check the values of f(x,y) and g(x,y) to determine whether you've found a root, a pole, or an extremum (maximum, minimum) but as far as stopping the iterations is concerned, the tolerance in ulps is the one to use for best results as it is completely independent of the magnitude of the roots, they might be of the order of 1E38 or of 1E-69 and it wouldn't matter. Regards. V. . 02-07-2017, 08:03 PM Post: #3 RE: Math question on Newton's method and detecting actual zeros (02-07-2017 06:45 PM)Valentin Albillo Wrote: . Thank you for the detailed solution; though in truth it was merely to present a case where a function might itself produce outputs that are extremely tiny. 
The math I understand quite well; it's the computer science part of implementing Newton's method that was giving me trouble. Your explanation above regarding ulps was precisely the answer I was looking for. Graph 3D | QPI | SolveSys
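To illustrate the stopping rule being discussed, here is a small Newton's-method sketch for this particular two-equation system (my own illustration: analytic Jacobian, no backtracking line search, and a relative tolerance on successive iterates as a simple stand-in for the "ulps in common" criterion).

```python
def newton_2d(F, J, x0, y0, rel_tol=1e-12, max_iter=100):
    """Newton iteration that stops when consecutive iterates agree to a relative
    tolerance, rather than when |F| falls below an absolute threshold."""
    x, y = x0, y0
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        a11, a12, a21, a22 = J(x, y)
        det = a11 * a22 - a12 * a21
        dx = (f1 * a22 - f2 * a12) / det      # solve J * [dx, dy] = [f1, f2]
        dy = (a11 * f2 - a21 * f1) / det
        x, y = x - dx, y - dy
        if abs(dx) <= rel_tol * abs(x) and abs(dy) <= rel_tol * abs(y):
            break
    return x, y

a, b = 127e-12, 8.377
F = lambda x, y: (x * y / (x + y) - a, ((x + y) / x) ** 2 - b)
J = lambda x, y: (y**2 / (x + y)**2, x**2 / (x + y)**2,       # df1/dx, df1/dy
                  -2 * (x + y) * y / x**3, 2 * (x + y) / x**2)  # df2/dx, df2/dy

print(newton_2d(F, J, 2e-10, 4e-10))   # converges near (1.940e-10, 3.676e-10)
```

Because the stopping test is relative, it behaves the same whether the root is of the order of 1E38 or 1E-69, which is the point made in the reply above.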
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
To prove the existence of $\aleph_1$ we use the concept of Hartogs number. The question asks, really, why are there uncountable ordinals, since $\aleph_1$ is by definition the least ordinal which is not countable. Take a set of cardinality $\aleph_0$, say $\omega$. Now consider all the orders on $\omega$ which are well-orders, and consider the order isomorphism as an equivalence relation. The collection of all equivalence classes is a set. Fact: If $(A,R)$ is a well-ordered set, then there exists a unique ordinal $\alpha$ such that $(A,R)\cong(\alpha,\in)$. Map every equivalence class to the unique ordinal which is order isomorphic to the members of the class. We now have a set and all its members are ordinals which correspond to possible well-ordering of $\omega$. Fact: The union of a set of ordinals is an ordinal, it is in fact the supremum of the elements in the union. Let $\alpha$ be the union of the set defined above. We have that $\alpha$ is an ordinal, and that every ordinal below $\alpha$ is a possible well-ordering of $\omega$ (and therefore countable). Suppose $\alpha$ was countable too, then $\alpha+1$ was also countable (since $\alpha+1=\alpha\cup\{\alpha\}$), and therefore a possible well ordering of $\omega$. This would contradict the above fact that $\alpha$ is greater or equal to all the ordinals which correspond to well-orderings of $\omega$, since $\alpha<\alpha+1$. This means that $\alpha$ is uncountable, and that it is the first uncountable ordinal, since if $\beta<\alpha$ then $\beta$ can be injected into $\omega$, and so it is countable. Therefore we have that $\alpha=\omega_1=\aleph_1$. Note that the above does not require the axiom of choice and holds in $\sf ZF$. The collection of all well-orders is a set by power set and replacement, so is the set of equivalence classes, from this we have that the collection of ordinals defined is also a set (replacement again), and finally $\alpha$ exists by the axiom of union. There was also no use of the axiom of choice because the only choice we had to do was of "a unique ordinal" which is a definable map (we can say when two orders are isomorphic, and when a set is an ordinal - without the axiom of choice). With the axiom of choice this can be even easier: From the axiom of choice we know that the continuum is bijectible with some ordinal. Let this order type be $\alpha$, now since the ordinals are well-ordered there exists some $\beta\le\alpha$ which is the least ordinal which cannot be injected into $\omega$ (that is there is no function whose domain is $\beta$, its range is $\omega$ and this function is injective). From here the same argument as before, since $\gamma<\beta$ implies $\gamma$ is countable, $\beta$ is the first uncountable ordinal, that is $\omega_1$. As to why there is no cardinals strictly between $\aleph_0$ and $\aleph_1$ (and between any two consecutive $\aleph$-numbers) also stems from this definition. $\aleph_0 = |\omega|$, the cardinality of the natural numbers, $\aleph_{\alpha+1} = |\omega_{\alpha+1}|$, the cardinality of the least ordinal number which cannot bijected with $\omega_\alpha$, $\aleph_{\beta} = \bigcup_{\alpha<\beta}\aleph_\alpha$, at limit points just take the supremum. This is a function from the ordinals to the cardinals, and this function is strictly increasing and continuous. Its result is well-ordered, i.e. linearly ordered, and every subset has a minimal element. This implies that $\aleph_1$ is the first $\aleph$ cardinal above $\aleph_0$, i.e. there are no others between them. 
Without the axiom of choice, however, there are cardinals which are not $\aleph$-numbers, and it is consistent with $\sf ZF$ that $2^{\aleph_0}$ is not an $\aleph$ number at all, and yet there are not cardinals strictly between $\aleph_0$ and $2^{\aleph_0}$ - that is $\aleph_0$ has two distinct immediate successor cardinals. For the second question, there is no actual limit. Within the confines of a specific model, the continuum is a constant, however using forcing we can blow up the continuum to be as big as we want. This is the work of Paul Cohen. He showed that you can add $\omega_2$ many subsets of $\omega$ (that is $\aleph_2\le 2^{\aleph_0}$), and the proof is very simple to generalize to any higher cardinal. In fact Easton's theorem shows that if $F$ is a function defined on regular cardinals, which has a very limited set of constraints, then there is a forcing extension where $F(\kappa) = 2^\kappa$, so we do not only violate $\sf CH$ but we violate $\sf GCH$ ($2^{\aleph_\alpha}=\aleph_{\alpha+1}$) in a very acute manner.
Spectral properties of higher order anharmonic oscillators

We discuss spectral properties of the selfadjoint operator $-\frac{d^2}{dt^2} + \left(\frac{t^{k+1}}{k+1} - \alpha\right)^2$ in $L^2(\mathbb{R})$ for odd integers $k$. We prove that the minimum over $\alpha$ of the ground state energy of this operator is attained at a unique point which tends to zero as $k$ tends to infinity. We also show that the minimum is nondegenerate. These questions arise naturally in the spectral analysis of Schrödinger operators with magnetic field. Bibliography: 13 titles. Illustrations: 2 figures.

Keywords: Spectral Property, Ground State Energy, Trial Function, Schwarz Inequality, Selfadjoint Operator
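The abstract above comes with no code, but the variational problem it describes is easy to explore numerically. The sketch below is my own illustration, not taken from the paper: it discretizes $-\frac{d^2}{dt^2} + \left(\frac{t^{k+1}}{k+1}-\alpha\right)^2$ with a standard finite-difference Laplacian on a truncated interval and minimizes the lowest eigenvalue over $\alpha$; the interval, grid size, and search bounds are guesses chosen only for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ground_state_energy(alpha, k=1, L=6.0, n=600):
    """Approximate the lowest eigenvalue of -d^2/dt^2 + (t^(k+1)/(k+1) - alpha)^2
    on [-L, L] with Dirichlet boundary conditions and a second-order
    finite-difference Laplacian."""
    t = np.linspace(-L, L, n)
    h = t[1] - t[0]
    potential = (t ** (k + 1) / (k + 1) - alpha) ** 2
    main = 2.0 / h ** 2 + potential        # diagonal of the discretized operator
    off = -np.ones(n - 1) / h ** 2         # off-diagonals of the Laplacian
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

# Minimize the ground state energy over alpha for a few odd k; the paper's claim
# is that the minimizer is unique and tends to 0 as k grows.
for k in (1, 3, 5):
    res = minimize_scalar(lambda a: ground_state_energy(a, k=k),
                          bounds=(-0.5, 2.0), method="bounded")
    print(f"k = {k}: alpha_min ~ {res.x:.4f}, lambda_min ~ {res.fun:.4f}")
```

Dense diagonalization is crude but adequate for a qualitative look; a sparse tridiagonal eigensolver would be the natural refinement.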
this is a mystery to me, despite having changed computers several times, despite the website rejecting the application, the very first sequence of numbers I entered into its search window which returned the same prompt to submit them for publication appears every time, I mean I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there

my maturity levels are extremely variant in time, that's just way too much rope to give me considering it's only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them

but still, the first one from, well, almost a decade ago shows up as the default content in the search window

1,2,3,6,11,23,47,106,235

well, now there is a bunch of stuff about them pertaining to "trees" and "nodes" but that's what I mean by too much rope, you can't just let a lunatic like me start inventing terminology as I go

oh well "what would cotton mathers do?" the chat room unanimously ponders lol

i see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, and even still it has its value, in a civil society it will be ridiculed anyway? or at least inform the room as to whom is the big brother doing the censoring? No? just suggestions trying to improve site functionality good sir

relax im calm we are all calm

A104101 is a hilarious entry as a side note, I love that Neil had to chime in in the comment section after the big promotional message in the first part to point out the sequence is totally meaningless as far as mathematics is concerned, just to save face for the website's integrity after plugging a tv series with a reference

But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable; if you still feel that your point is not being heard, persist until they give up the farce please, very general advice for any number of topics for someone like yourself sir

assuming gender because you should have come to hate text-based adam long ago if you were female or etc; if it's false then I apologise for the statistical approach to human interaction

So after having found the polynomial $x^6-3x^4+3x^2-3$ we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$, and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$?

@MatheinBoulomenos So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used to generate the field? (I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
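On the Eisenstein question: yes, the criterion applies at $p=3$ (3 divides every non-leading coefficient of $x^6-3x^4+3x^2-3$ and $9$ does not divide the constant term $-3$), so the polynomial is irreducible over $\mathbb{Q}$, and being monic with $\sqrt{1+\sqrt[3]{2}}$ as a root, it is the minimal polynomial. On the Galois field question, the answer is no: the product of two residue classes depends on the modulus $M(x)$. The toy sketch below (helper names are mine, written from scratch for illustration) multiplies $x$ by $x^2$ in $\mathbb{F}_2[x]/(M(x))$ for two different irreducible cubics and gets two different answers.

```python
# Polynomials over F_2 are lists of bits; index i holds the coefficient of x^i.

def poly_mul_gf2(a, b):
    """Multiply two polynomials over F_2 (no reduction)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def poly_mod_gf2(a, m):
    """Reduce a modulo the monic polynomial m over F_2."""
    a = a[:]
    deg_m = len(m) - 1
    for i in range(len(a) - 1, deg_m - 1, -1):
        if a[i]:
            for j in range(len(m)):
                a[i - deg_m + j] ^= m[j]
    return a[:deg_m]

def gf_mul(a, b, m):
    """Multiply two field elements of F_2[x]/(m(x))."""
    return poly_mod_gf2(poly_mul_gf2(a, b), m)

x_, x2 = [0, 1, 0], [0, 0, 1]   # the elements x and x^2
m1 = [1, 1, 0, 1]               # x^3 + x + 1
m2 = [1, 0, 1, 1]               # x^3 + x^2 + 1

print(gf_mul(x_, x2, m1))   # [1, 1, 0], i.e. x^3 = x + 1   in F_2[x]/(x^3 + x + 1)
print(gf_mul(x_, x2, m2))   # [1, 0, 1], i.e. x^3 = x^2 + 1 in F_2[x]/(x^3 + x^2 + 1)
```

Since the two products differ, the modulus really is part of the data needed to multiply.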
(which is just the product of the integer and its conjugate)

Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$. You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings (plus I'm at work and am pretending I'm doing my job). Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime, then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative), and so, for example, $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit. (A small code sketch of this norm bookkeeping follows this exchange.)

@Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha]) = [\mathcal O_K:\Bbb Z[\alpha]]^2\,\Delta(\mathcal O_K)$; I'd suggest you read up on orders, the index of an order and discriminants of orders if you want to go into that rabbit hole. Also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$; this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$. The proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$ (just as a quotient of additive groups, that quotient group is finite). In particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein for every prime $p$ that divides the discriminant of $\Bbb Z[\alpha]$ at least twice, then $\Bbb Z[\alpha]$ is a ring of integers. That sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$. There's an interpretation of the $p$-Eisenstein results in terms of local fields, too: if the minimal polynomial $f$ of $\alpha$ is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well. Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus).

@MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$; then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively. $\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism.

There might be easier arguments, but this is what pops to mind first: by the Schur-Zassenhaus theorem, $G = P \rtimes G/P$, and $G/P$ acts trivially on $P$ (the action is by inner automorphisms, and $P$ doesn't have any). There is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly the action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$. The $E_2$ page is essentially zero except along the edge where the $G/P$-cohomological degree is zero, since $H^n(G/P; M) = 0$ for $n>0$ when $M$ is an $\Bbb F_p$-module, by order reasons, and that edge is exactly $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E_2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$.
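Picking up the norm discussion from the top of this exchange, here is the promised sketch: a minimal Python illustration, with Python complex numbers standing in for elements of $\Bbb Z[i]$ and helper names of my own choosing.

```python
def norm(z):
    """N(a + bi) = a^2 + b^2, i.e. z times its conjugate."""
    return round(z.real) ** 2 + round(z.imag) ** 2

def is_unit(z):
    """alpha is a unit in Z[i] iff N(alpha) = 1."""
    return norm(z) == 1

a, b = 3 + 2j, 1 + 2j
# The norm is (totally) multiplicative:
assert norm(a * b) == norm(a) * norm(b)

# 5 is prime in Z, but not in Z[i]: it is the norm of 1 + 2i, which is not a unit,
# and indeed 5 = (1 + 2i)(1 - 2i).
z = 1 + 2j
print(norm(z), z * z.conjugate(), is_unit(z))   # 5 (5+0j) False
```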
@Secret that's a very lazy habit, you should create a chat room for every purpose you can imagine, take full advantage of the website's functionality as I do, and leave the general purpose room for recommending art related to mathematics

@MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is: what's the general algorithm to recover $H^*(G)$ back from the $H^*(P; \Bbb F_p)$'s, where $P$ runs over Sylow $p$-subgroups of $G$?

Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Providing no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists...

As a result, there does not exist a single group which lived long enough to belong to, and hence one continues to search for new groups and activities; eventually, a social heat death occurred, where no groups generate creativity or other activity anymore. Had this kind of thought when I noticed how many forums etc. have a golden age and then die away, and at the more personal level, all the people who first knew me generate a lot of activity, and then are destined to die away and grow distant roughly every 3 years

Well I guess the lesson you need to learn here champ is that online interaction isn't something built into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next door neighbour

Or more likely, we will need to start recognising machines as a new species and interact with them accordingly

so covert operations AI may still exist, even as domestic AIs continue to become widespread

It seems more likely that sentient AI will take similar roles as humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces

But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different that we end up complementing each other

that is, until their processing power becomes so strong that they can outdo human thinking

But I am not worried about that scenario, because if the next step is a sentient AI evolution, then humans would know they will have to give way

However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough given the mistakes still made by them and their human owners

That is, we have become over-reliant on AI, and are not paying enough attention to whether it has interpreted the instructions correctly

That's an extraordinary amount of unreferenced rhetorical statements I could find anywhere on the internet!
When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction

for example I tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females I have intercourse with are easily replaceable and this can be proven from historical statistical data, but she won't even let my spirit guide elaborate on that premise

I feel as if it's an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk)

that's an important binary

class dismissed

Chatroom troll: a person who types messages in a chatroom with the sole purpose to confuse or annoy. I was just genuinely curious how a message like this comes from someone who isn't trolling: "for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ... with are easily replaceable and this can be proven from historical statistical data, but she wont even let my spirit guide elaborate on that premise"

Anyway feel free to continue, it just seems strange

@Adam I'm genuinely curious what makes you annoyed or confused

yes I was joking in the line that you referenced, but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experiences vary from day to day? Mine too! So there may be particular moments that I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character, wouldn't you agree?

So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or "alternativity" ($(xy)x=x(yy)$, iirc). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave, i.e. $a(bc)=(ab)c\iff a=c$.)

@RyanUnger You're the guy to ask for this sort of thing I think: if I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use Koszul's formula? Or is there a smarter way?

I realized today that the possible $x$ inputs to $\operatorname{Round}(x^{1/2})$ cover $x^{1/2+\epsilon}$. In other words, we can always find an $\epsilon$ (small enough) such that $x^{1/2} \neq x^{1/2+\epsilon}$ but at the same time have $\operatorname{Round}(x^{1/2})=\operatorname{Round}(x^{1/2+\epsilon})$. Am I right?

We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), \quad n=0, \ldots , N-2, \qquad y^0, y^1 \text{ given.}$$ Show that the method is implicit and state the stability definition of that method. How can we show that the method is implicit? Do we have to try to solve for $y^{n+2}$ as a function of $y^{n+1}$?

@anakhro an energy function of a graph is something studied in spectral graph theory. You set up an adjacency matrix for the graph, find the corresponding eigenvalues of the matrix and then sum the absolute values of the eigenvalues. The energy of a graph is defined for simple graphs by this summation of the absolute values of the eigenvalues
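To make the graph-energy definition at the end of that exchange concrete, here is a minimal numpy sketch (the function name and the $C_4$ example are mine, not from the chat).

```python
import numpy as np

def graph_energy(adjacency):
    """Energy of a simple graph: sum of the absolute values of the
    eigenvalues of its symmetric adjacency matrix."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(adjacency, dtype=float))
    return float(np.sum(np.abs(eigenvalues)))

# The 4-cycle C_4 has adjacency spectrum {2, 0, 0, -2}, so its energy is 4.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(graph_energy(C4))   # 4.0
```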
When we work with inequalities, we can usually treat them similarly to the way we treat equalities, but not exactly the same. We can use the addition property and the multiplication property to help us solve them. The one exception is when we multiply or divide by a negative number; doing so reverses the inequality symbol.

A General Note: Properties of Inequalities

[latex]\begin{array}{ll}\text{Addition Property}\hfill& \text{If }a< b,\text{ then }a+c< b+c.\hfill \\ \hfill & \hfill \\ \text{Multiplication Property}\hfill & \text{If }a< b\text{ and }c> 0,\text{ then }ac< bc.\hfill \\ \hfill & \text{If }a< b\text{ and }c< 0,\text{ then }ac> bc.\hfill \end{array}[/latex]

These properties also apply to [latex]a\le b[/latex], [latex]a>b[/latex], and [latex]a\ge b[/latex].

Example 3: Demonstrating the Addition Property

Illustrate the addition property for inequalities by solving each of the following:

a. [latex]x - 15<4[/latex]
b. [latex]6\ge x - 1[/latex]
c. [latex]x+7>9[/latex]

Solution

The addition property for inequalities states that adding or subtracting the same number on both sides does not change the inequality.

a. [latex]\begin{array}{ll}x - 15<4\hfill & \hfill \\ x - 15+15<4+15 \hfill & \text{Add 15 to both sides.}\hfill \\ x<19\hfill & \hfill \end{array}[/latex]

b. [latex]\begin{array}{ll}6\ge x - 1\hfill & \hfill \\ 6+1\ge x - 1+1\hfill & \text{Add 1 to both sides}.\hfill \\ 7\ge x\hfill & \hfill \end{array}[/latex]

c. [latex]\begin{array}{ll}x+7>9\hfill & \hfill \\ x+7 - 7>9 - 7\hfill & \text{Subtract 7 from both sides}.\hfill \\ x>2\hfill & \hfill \end{array}[/latex]

Try It 3

Solve [latex]3x - 2<1[/latex].

Example 4: Demonstrating the Multiplication Property

Illustrate the multiplication property for inequalities by solving each of the following:

a. [latex]3x<6[/latex]
b. [latex]-2x - 1\ge 5[/latex]
c. [latex]5-x>10[/latex]

Solution

a. [latex]\begin{array}{ll}3x<6\hfill & \hfill \\ \frac{1}{3}\left(3x\right)<\left(6\right)\frac{1}{3}\hfill & \text{Multiply both sides by }\frac{1}{3}.\hfill \\ x<2\hfill & \hfill \end{array}[/latex]

b. [latex]\begin{array}{ll}-2x - 1\ge 5\hfill & \hfill \\ -2x\ge 6\hfill & \text{Add 1 to both sides}.\hfill \\ \left(-\frac{1}{2}\right)\left(-2x\right)\le \left(6\right)\left(-\frac{1}{2}\right)\hfill & \text{Multiply both sides by }-\frac{1}{2}\text{ and reverse the inequality}.\hfill \\ x\le -3\hfill & \hfill \end{array}[/latex]

c. [latex]\begin{array}{ll}5-x>10\hfill & \hfill \\ -x>5\hfill & \text{Subtract 5 from both sides}.\hfill \\ \left(-1\right)\left(-x\right)<\left(5\right)\left(-1\right)\hfill & \text{Multiply both sides by }-1\text{ and reverse the inequality}.\hfill \\ x<-5\hfill & \hfill \end{array}[/latex]

Try It 4

Solve [latex]4x+7\ge 2x - 3[/latex].

Solving Inequalities in One Variable Algebraically

As the examples have shown, we can perform the same operations on both sides of an inequality, just as we do with equations; we combine like terms and perform operations. To solve, we isolate the variable.

Example 5: Solving an Inequality Algebraically

Solve the inequality: [latex]13 - 7x\ge 10x - 4[/latex].

Solution

Solving this inequality is similar to solving an equation up until the last step.

[latex]\begin{array}{ll}13 - 7x\ge 10x - 4\hfill & \hfill \\ 13\ge 17x - 4\hfill & \text{Add }7x\text{ to both sides}.\hfill \\ 17\ge 17x\hfill & \text{Add 4 to both sides}.\hfill \\ 1\ge x\hfill & \text{Divide both sides by 17}.\hfill \end{array}[/latex]

The solution set is given by the interval [latex]\left(-\infty ,1\right][/latex], or all real numbers less than or equal to 1.

Try It 5

Solve the inequality and write the answer using interval notation: [latex]-x+4<\frac{1}{2}x+1[/latex].

Example 6: Solving an Inequality with Fractions

Solve the following inequality and write the answer in interval notation: [latex]-\frac{3}{4}x\ge -\frac{5}{8}+\frac{2}{3}x[/latex].

Solution

We begin solving in the same way we do when solving an equation.

[latex]\begin{array}{ll}-\frac{3}{4}x\ge -\frac{5}{8}+\frac{2}{3}x\hfill & \hfill \\ -\frac{3}{4}x-\frac{2}{3}x\ge -\frac{5}{8}\hfill & \text{Subtract }\frac{2}{3}x\text{ from both sides}.\hfill \\ -\frac{17}{12}x\ge -\frac{5}{8}\hfill & \text{Combine like terms}.\hfill \\ x\le \frac{15}{34}\hfill & \text{Multiply both sides by }-\frac{12}{17}\text{ and reverse the inequality}.\hfill \end{array}[/latex]

The solution set is the interval [latex]\left(-\infty ,\frac{15}{34}\right][/latex].

Try It 6

Solve the inequality and write the answer in interval notation: [latex]-\frac{5}{6}x\le \frac{3}{4}+\frac{8}{3}x[/latex].
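The worked examples above can also be cross-checked symbolically. The short sketch below is not part of the original lesson; it simply feeds Examples 5 and 6 to SymPy's univariate inequality solver, which should reproduce the solution sets stated above.

```python
from sympy import Rational, symbols
from sympy.solvers.inequalities import solve_univariate_inequality

x = symbols('x', real=True)

# Example 5: 13 - 7x >= 10x - 4, solution set (-oo, 1]
print(solve_univariate_inequality(13 - 7*x >= 10*x - 4, x))

# Example 6: -(3/4)x >= -(5/8) + (2/3)x, solution set (-oo, 15/34]
print(solve_univariate_inequality(-Rational(3, 4)*x >= -Rational(5, 8) + Rational(2, 3)*x, x))
```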