From what I know of analyzing and designing approximation algorithms, we need to find a lower bound on the optimum (in the case of minimization). For example, if our solution is greedy ($SOL_G$) and we find a lower bound $L_B$ on $OPT$, then $$L_B \leq OPT,$$ and if we correctly show that $$SOL_G \leq f \cdot L_B,$$ then we can conclude that $$SOL_G \leq f \cdot OPT,$$ where $f$ is the approximation factor of our greedy solution. Vijay Vazirani, in his book Approximation Algorithms, page 16, presents the greedy set cover algorithm and his analysis of the lower bound on the optimum (Lemma 2.3). I have a problem understanding this lemma. By his convention at the beginning of the book, $OPT$ stands for the optimal solution of the problem instance (not the solution obtained by the approximation algorithm). At the beginning of the lemma he says: "In any iteration, the leftover sets of the optimal solution can cover the remaining elements at a cost of at most OPT." This is clear, because $\sum_{e \in U} price(e) = OPT$. (The price assignment is based on the selection of the sets.) In the next sentence he says: "Therefore, among these sets, there must be one having cost-effectiveness of at most $OPT/|\bar{C}|$," where $\bar{C} = U - C$ and $C$ is the set of elements covered so far. I can't understand why this is true. $OPT$ is the optimum of the problem instance, not of the algorithm. Because in the algorithm the price of each set is distributed among its elements, it seems he uses this fact, but $OPT$ is not about the algorithm! It is the optimum of the problem instance. Thanks.
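(For concreteness, here is a minimal Python sketch of the greedy rule and the price assignment being discussed. It is my own illustration rather than code from the book, and the toy instance at the bottom is made up.)

def greedy_set_cover(universe, sets, cost):
    # sets: dict mapping a set name to a frozenset of elements
    # cost: dict mapping a set name to its (positive) cost
    covered, price, chosen = set(), {}, []
    while covered != universe:
        # pick the most cost-effective set: minimum cost per newly covered element
        best = min((s for s in sets if sets[s] - covered),
                   key=lambda s: cost[s] / len(sets[s] - covered))
        newly = sets[best] - covered
        for e in newly:
            # the chosen set's cost is spread equally over its newly covered elements
            price[e] = cost[best] / len(newly)
        covered |= newly
        chosen.append(best)
    # sum(price.values()) equals the total cost of the sets chosen by the greedy algorithm
    return chosen, price

universe = {1, 2, 3, 4}
sets = {"S1": frozenset({1, 2}), "S2": frozenset({2, 3, 4}), "S3": frozenset({4})}
cost = {"S1": 1.0, "S2": 2.0, "S3": 0.5}
print(greedy_set_cover(universe, sets, cost))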
The magnetic field due to a magnetic dipole moment $\boldsymbol{m}$ at a point $\boldsymbol{r}$ relative to it may be written $$ \boldsymbol{B}(\boldsymbol{r}) = \frac{\mu_0}{4\pi r^3}\left[3\hat{\boldsymbol{r}}(\hat{\boldsymbol{r}} \cdot \boldsymbol{m}) - \boldsymbol{m}\right], $$ where $\mu_0$ is the vacuum permeability. In geomagnetism, it is usual to write the radial and angular components of $\boldsymbol{B}$ as: $$ \begin{align*} B_r & = -2B_0\left(\frac{R_\mathrm{E}}{r}\right)^3\cos\theta, \\ B_\theta & = -B_0\left(\frac{R_\mathrm{E}}{r}\right)^3\sin\theta, \\ B_\phi &= 0, \end{align*} $$ where $\theta$ is the polar (colatitude) angle (relative to the magnetic North pole), $\phi$ is the azimuthal angle (longitude), and $R_\mathrm{E}$ is the Earth's radius, about 6370 km. See below for a derivation of these formulae.

The Earth Impact Database is a collection of images, publications and abstracts that provides information about confirmed impact structures for the scientific community. It is hosted at the Planetary and Space Science Centre (PASSC) of the University of New Brunswick.

This small Python project is a physical simulation of two-dimensional physics. The animation is carried out using Matplotlib's FuncAnimation method and is implemented by the class Simulation. Each "particle" of the simulation is represented by an instance of the Particle class and depicted as a circle with a fixed radius which undergoes elastic collisions with other particles.

The following code attempts to pack a predefined number of smaller circles (of random radii between two given limits) into a larger one.
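(The post's actual code is not reproduced above, so here is a rough, self-contained sketch of the kind of rejection-sampling approach such a packing might use; the function name, parameter values and the uniform choice of radii are my own assumptions, not the post's.)

import random
import math

def pack_circles(n, r_min, r_max, R=1.0, max_tries=10000):
    """Try to place n non-overlapping circles with random radii in [r_min, r_max]
    inside a big circle of radius R centred at the origin, by simple rejection
    sampling. Returns a list of (x, y, r) triples."""
    placed = []
    for _ in range(n):
        for _ in range(max_tries):
            r = random.uniform(r_min, r_max)
            # random centre uniformly distributed over the disc of radius R - r,
            # so the small circle is guaranteed to lie inside the big one
            rho = (R - r) * math.sqrt(random.random())
            phi = random.uniform(0.0, 2.0 * math.pi)
            x, y = rho * math.cos(phi), rho * math.sin(phi)
            if all(math.hypot(x - cx, y - cy) >= r + cr for cx, cy, cr in placed):
                placed.append((x, y, r))
                break
        else:
            break  # could not place this circle within max_tries; give up early
    return placed

print(len(pack_circles(50, 0.02, 0.08)))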
Graduate seminar on Advanced topics in PDE

Organizers: Prof. Dr. Herbert Koch, Prof. Dr. Christoph Thiele, Dr. Alex Amenta, João Pedro Ramos

Schedule: This seminar takes place regularly on Fridays, at 14.00 (c.t.), in Raum 0.011, Endenicher Allee 60.

July 12th - Joris Roos. Title: Discrete analogues of maximally modulated singular integrals of Stein-Wainger type. Abstract: Stein and Wainger introduced an interesting class of maximal oscillatory integral operators related to Carleson's theorem. This talk is about the L^2 theory for discrete analogues of some of these operators. This problem features a number of new and substantial difficulties arising from a curious fusion of number theory and analysis. Our approach builds on work of Krause (2018) and Krause-Lacey (2015). A key ingredient is a recent variation-norm estimate (Guo-Roos-Yung 2017).

July 5th - Francesco di Plinio. Title: Directional Carleson sequences: weighted estimates and applications. Abstract: I will present a notion of Carleson norm for sequences indexed by families of tubes pointing along a certain set of directions. I will discuss weighted (and unweighted) inequalities for these Carleson sequences. Applications include: the sharp form of Meyer's lemma on directional square functions; sharply quantified Rubio de Francia type estimates for Fourier restrictions to directional sets; quantified or sharply quantified weighted and unweighted inequalities for the maximal directional function and the maximal directional Hilbert transform. Joint work with Natalia Accomazzo (partly) and Ioannis Parissis (in full) of the University of the Basque Country.

June 28th - Bas Nieraeth. Title: Weighted theory and extrapolation for multilinear operators. Abstract: In one of its forms in the linear case, Rubio de Francia's extrapolation theorem states that if an operator T is bounded on $L^q(w)$ for a single $1 \leq q < \infty$ for all weights w in the Muckenhoupt class $A_q$, then T is in fact bounded on $L^p(w)$ for all $1 < p < \infty$ for all w in the Muckenhoupt class $A_p$. In recent developments, motivated by operators such as the bilinear Hilbert transform, multilinear versions of this result have appeared. In this talk I will discuss the recent multilinear extrapolation result I obtained. The proof here differs from the proofs given in the works of Cruz-Uribe, Martell [2017], Li, Martell, Ombrosi [2018], and the recent work of Li, Martell, Martikainen, Ombrosi, Vuorinen [2019] in that it does not rely on any linear off-diagonal extrapolation techniques. Rather, a multilinear analogue of the Rubio de Francia algorithm was developed, leading to an extrapolation theorem that includes the endpoints as well as a sharp dependence result with respect to the involved multilinear Muckenhoupt constants.

June 21st - Denis Brazke. Title: BMO and Carleson measures on Riemannian manifolds. Abstract: Let M be a Riemannian n-manifold with a metric such that the manifold is Ahlfors-regular. We also assume either non-negative Ricci curvature, or that the Ricci curvature is bounded from below together with a bound on the gradient of the heat kernel. We characterize BMO-functions on M by a Carleson measure condition of their σ-harmonic extension.
As an application we show that the famous theorem of Coifman--Lions--Meyer--Semmes holds in this class of manifolds, which now follows from the arguments for commutators recently proposed by Lenzmann and Schikorra using only harmonic extensions, integration by parts, and trace space characterizations. This is joint work with Armin Schikorra and Yannick Sire.

June 14th - Matthew de Courcy-Ireland. Title: Energy, packing, and linear programming. Abstract: The sphere packing problem is to arrange equal-sized spheres without overlap so as to occupy the greatest fraction of volume possible. Energy minimization is a closely related problem where overlap is allowed, at a price, and the goal is to minimize the total cost. Linear programming gives bounds on how well one can do at both problems. The linear program has exact solutions in Euclidean space of dimension 8 and 24, found by Cohn-Kumar-Miller-Radchenko-Viazovska. The goal of this talk is to introduce these problems and describe some ongoing work joint with Henry Cohn and Ganesh Ajjanagadde concerning the high-dimensional case.

June 7th - No talk (pre-Pentecost break)

May 31st - Friedrich Littman. Title: Concentration inequalities for bandlimited functions. Abstract: This talk considers the following problem: How much of an integral norm of a function with compactly supported Fourier transform can be concentrated on a sparse set? The resulting inequalities have explicit bounds that depend on the size of the support of the transform and on a measure of sparsity of the set. I will describe some applications from analytic number theory, signal processing, and Lagrange interpolation, and will outline existing strategies (building on work of Selberg and of Donoho and Logan) to obtain concentration inequalities.

May 24th - Ziping Rao. Title: Blowup stability for wave equations with power nonlinearity. Abstract: We introduce the method of similarity coordinates to study the stability of ODE blowup solutions of wave equations with power nonlinearity in the lightcone. We first recall stability results in higher Sobolev spaces. In this case, using the Lumer--Phillips theorem we obtain a solution semigroup to the Cauchy problem. Then by the Gearhart--Prüss theorem we obtain enough decay of the semigroup to control the nonlinearity. Then we show stability of the ODE blowup for the energy critical equation in energy space, by establishing Strichartz estimates in similarity coordinates. In this case the Gearhart--Prüss theorem does not give a useful bound. Hence we need to construct an explicit expression of the semigroup, from which we are finally able to prove Strichartz estimates and an improved energy estimate to control the nonlinearity in the energy space. The result in the energy critical case in $d=5$ is by Roland Donninger and myself (the pioneering work of the $d=3$ case is by Roland Donninger).

May 17th - Talk Cancelled

May 10th - Felipe Gonçalves. Title: Broken Symmetries of the Schrodinger Equation and Strichartz Estimates. Abstract: This will be a short talk where we report some of the partial results of an ongoing work with Don Zagier. We study the Schrodinger equation from the point of view of Hermite and Laguerre expansions and establish a diagonalization result for initial data with prescribed parity in 3 dimensions that present exotic and unexpected associated eigenvalues. In particular, we derive a sharpened inequality for the one-dimensional Strichartz inequality for even initial data. For odd initial data we prove the extremizer is the derivative of a Gaussian.
We remark this is still unfinished work, and some questions are still left to be answered, so the audience is more than welcome to ask all sorts of questions.

May 3rd - Alexander Volberg. Title: Bi-parameter Carleson embedding. Abstract: Nicola Arcozzi, Pavel Mozolyako, Karl-Mikael Perfekt and Giulia Sarfatti recently gave the proof of a bi-parameter Carleson embedding theorem. Their proof uses heavily the notion of capacity on the bi-tree. In this note we give one more proof of a bi-parameter Carleson embedding theorem that avoids the use of bi-tree capacity. Unlike the proof on a simple tree that used the Bellman function technique, the proof here is based on some rather subtle comparison of energies of measures on the bi-tree. The bi-tree Carleson embedding theorem turns out to be very different from the usual one on a simple tree. In particular, various types of Carleson conditions are not equivalent in general in the bi-parameter case.

April 26th - Wiktoria Zatoń. Title: On the well-posedness for higher order parabolic equations with rough coefficients. Abstract: In the first part we study the existence and uniqueness of solutions to the higher order parabolic Cauchy problems on the upper half space, given by $\partial_t u = (-1)^{m+1} \mbox{div}_m A(t,x)\nabla^m u$ with $L^p$ initial data. The (complex) coefficients are only assumed to be elliptic and bounded measurable. Our approach follows the recent developments in the field for the case $m=1$. In the second part we consider the $BMO$ space of initial data. We will see that the Carleson measure condition $$\sup_{x\in \mathbb{R}^n} \sup_{r>0} \frac{1}{|B(x,r)|}\int_{B(x,r)}\int_0^{r}|t^m\nabla^m u(t^{2m},x)|^2\frac{dxdt}{t}<\infty$$ provides, up to polynomials, a well-posedness class for $BMO$. In particular, since the operator $L$ is arbitrary, this also leads to a new, broad Carleson measure characterization of $BMO$ in terms of solutions to the parabolic system.

April 19th - No talk (Karfreitag)

April 12th - Alex Amenta. Title: Banach-valued modulation-invariant Carleson embeddings and outer measure spaces: the Walsh case. Abstract: Consider three Banach spaces $X_0, X_1, X_2$, linked with a bounded trilinear form $\Pi : X_0 \times X_1 \times X_2 \to \mathbb{C}$. Given this data one can define Banach-valued analogues of the bilinear Hilbert transform and its associated trilinear form. Using the Do-Thiele theory of outer $L^p$-spaces, $L^p$-bounds for these objects can be reduced to modulation-invariant Carleson embeddings of $L^p(\mathbb{R};X_v)$ into appropriate outer $L^p$-spaces. We prove such embeddings in the Banach-valued setting for a discrete model of the real line, the 3-Walsh group. Joint work with Gennady Uraltsev (Cornell).
An RLC circuit is a simple electric circuit with a resistor, inductor and capacitor in it -- with resistance R, inductance L and capacitance C, respectively. It's one of the simplest circuits that displays non-trivial behavior. You can derive an equation for the behavior by using Kirchhoff's laws (conservation of the stocks and flows of electrons) and the properties of the circuit elements. Wikipedia does a fine job. You arrive at a solution for the current as a function of time that looks generically like this (not the most general solution, but a solution): $$ i(t) = A e^{\left( -\alpha + \sqrt{\alpha^{2} - \omega^{2}} \right) t} $$ with $\alpha = R/2L$ and $\omega = 1/\sqrt{L C}$. If you fill in some numbers for these parameters, you can get all kinds of behavior: As you can tell from that diagram, the Kirchhoff conservation laws don't in any way nail down the behavior of the circuit. The values you choose for R, L and C do. You could have a slowly decaying current or a quickly oscillating one. It depends on R, L and C. Now you may wonder why I am talking about this on an economics blog. Well, Cullen Roche implicitly asked a question: Although [stock flow consistent models are] widely used in the Fed and on Wall Street it hasn’t made much impact on more mainstream academic economic modeling techniques for reasons I don’t fully know. The reason is that the content of stock flow consistent modeling is identical to Kirchhoff's laws. Currents are flows of electrons (flows of money); voltages are stocks of electrons (stocks of money). Kirchhoff's laws do not in any way nail down the behavior of an RLC circuit. SFC models do not nail down the behavior of the economy. If you asked what the impact of some policy was and I gave you the graph above, you'd probably never ask again. What SFC models do in order to hide the fact that anything could result from an SFC model is effectively assume R = L = C = 1, which gives you this: I'm sure to get objections to this. There might even be legitimate objections. But I ask of any would-be objector: How is accounting for money different from accounting for electrons? Before saying this circuit model is in continuous time, note that there are circuits with clock cycles -- in particular the device you are currently reading this post with. I can't for the life of me think of any objection, and I showed exactly this problem with an SFC model from Godley and Lavoie: But to answer Cullen's implicit question -- as the two Mathematica notebooks above show, SFC models don't specify the behavior of an economy without assuming R = L = C = 1 ... that is to say Γ = 1. Update: Nick Rowe is generally better than me at these things.
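(A quick way to see the "all kinds of behavior" point numerically is to evaluate the solution above for a few different values of R, L and C. This is my own minimal sketch; the parameter values are arbitrary.)

import numpy as np

def rlc_current(t, R, L, C, A=1.0):
    """Evaluate i(t) = A * exp((-alpha + sqrt(alpha^2 - omega^2)) t)
    with alpha = R / (2 L) and omega = 1 / sqrt(L C).
    The square root is taken as complex, so the same formula covers both the
    overdamped (real exponent) and the oscillatory (complex exponent) case."""
    alpha = R / (2.0 * L)
    omega = 1.0 / np.sqrt(L * C)
    s = -alpha + np.sqrt(complex(alpha**2 - omega**2))
    return A * np.exp(s * t)

t = np.linspace(0.0, 10.0, 5)
print(rlc_current(t, R=5.0, L=1.0, C=1.0).real)   # slow decay, no oscillation
print(rlc_current(t, R=0.2, L=1.0, C=1.0).real)   # decaying oscillation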
Every $LL(k)$ grammar is $LR(k)$, but there are $LL(k)$ grammars which are not $LALR(k)$. There's a simple example in Parsing Theory by Sippu & Soisalon-Soininen: $$\begin{align}S &\to a A a \mid b A b \mid a B b \mid b B a\\A &\to c \\B &\to c\end{align}$$ The language of this grammar is finite, so it is obviously $LL(k)$. (In this case, $LL(3)$.) The grammar is also $LR(1)$. However, the grammar is not $LALR(k)$ for any value of $k$. The canonical $LR(k)$ machine has two states with $LR(0)$ itemsets $\{[A\to c \cdot], [B\to c\cdot]\}$. These two states have different lookahead sets in each item, corresponding to the two different predecessor states before shifting $c$. The $LALR$ algorithm merges these two states, thereby losing the distinction between the lookahead sets. This produces two reduce-reduce conflicts. Since there is only one token of lookahead at this point, increasing $k$ would make no difference.
Is the following true? \begin{equation}\frac{\partial}{\partial X_{t+1}}E_t(f(X_{t+1}))=E_t(\frac{\partial}{\partial X_{t+1}}f(X_{t+1}))\tag{1}\end{equation} where $f$ is some affine function (e.g., $f(x)=a+bx$), $E_t(X_s)=E(X_s|I_t)$ denotes the conditional expectation of $X_s$ given information $I_t$ in time period $t$, and where $X_s$ denotes the value of some variable $X$, say financial asset holdings, in time period $s$, such that $X_s$ is unknown (i.e., stochastic) given information at time periods $0\leq t<s$ but known otherwise. Edit 3: I think we should view $X_{t+1}$ as a function of information in time period $t$, i.e. $X_{t+1}=X_{t+1}(I_t)$. This prevents $(1)$ from being zero (see Alecos' comment below). If this is the right way to look at the problem (see "Edit 2" below; I do not know how to look at the problem, that's why I'm asking!) then $(1)$ may be written in detail as \begin{equation}\frac{\partial}{\partial X_{t+1}}E(f(X_{t+1}(I_t))|I_t)=E(\frac{\partial}{\partial X_{t+1}}f(X_{t+1}(I_t))|I_t).\tag{1'}\end{equation} I often notice that $(1)$ (or $(1')$) seems to be used when studying real business cycle models with uncertainty where the Lagrange method is applied (this framework is outlined in e.g. Gregory C. Chow's Dynamic Economics: Optimization by the Lagrange Method). For example, we may have a situation where we have to differentiate \begin{equation}-\lambda_t A_{t+1}+E_t(\lambda_{t+1}(1+r_t)A_{t+1})+\Phi\tag{2}\end{equation} w.r.t. financial asset holdings $A_{t+1}$ in time period $t+1$, where $\lambda_t,\lambda_{t+1}\geq 0$, $r_t\in\mathbb{R}$ and $\Phi$ are independent of $A_{t+1}$ (i.e., they are not functions of $A_{t+1}$), and seem to use $(1)$ to arrive at the conclusion that the partial derivative of $(2)$ w.r.t. $A_{t+1}$ is \begin{equation}-\lambda_t+E_t(\lambda_{t+1}(1+r_t)).\tag{3}\end{equation} I understand neither whether $(1)$ is used to justify the implication from $(2)$ to $(3)$, nor whether $(1)$ is true within the framework mentioned in the parenthesis above. I do know of other treatments which begin by discussing measure theory and stochastic processes (e.g., Lecture notes for Macroeconomics I: Chapter 6) and then derive similar, but not exactly analogous, first order conditions to the one derived by using fact $(1)$ and the Lagrange method. Edit 1: I was asked to post a reference. The reference is lecture notes written by my lecturer. It says the following. If we have a representative household's maximization problem $$\max_{\{C_t,A_{t+1}\}_{t=0}^{\infty}}E_0\sum_{t=0}^{\infty}\beta^tu(C_t)$$ subject to the budget constraint $$C_t+A_{t+1}=Y_t+(1+r)A_t,\quad\forall t\in\mathbb{Z}_{\geq 0},$$ then we want to investigate the first order conditions for the Lagrangian \begin{equation}\mathcal{L}=E_0\left[\sum_{t=0}^{\infty}\beta^tu(C_t)+\sum_{t=0}^{\infty}\lambda_t[Y_t+(1+r)A_t-C_t-A_{t+1}]\right].\tag{4}\end{equation} One first order condition is the partial derivative of $\mathcal{L}$ w.r.t. $A_{t+1}$: $$-\lambda_t+E_t[\lambda_{t+1}(1+r)].$$ (To be exact he writes that the above is a first order condition for the Lagrangian $\mathcal{L}$. I've interpreted that as meaning that he partially differentiates $\mathcal{L}$ w.r.t. $A_{t+1}$.) My question is then: Why is that true? Edit 2: I've also used the following reference: Chapter 5. Real Business Cycles. See equation $(5.7)$. How does the author derive that equation? Does he differentiate inside the conditional expectations, as expressed by me in $(1)$?
Edit 4: To be more exact, $I_t$ may capture the value of e.g. output $Y_t$ (compare with the budget constraint above).
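Edit 5: For reference, the only terms of the Lagrangian $(4)$ that actually contain $A_{t+1}$ are (with $\Phi$ collecting everything independent of $A_{t+1}$, as in $(2)$) \begin{equation}E_0\left[\cdots+\lambda_t\left[Y_t+(1+r)A_t-C_t-A_{t+1}\right]+\lambda_{t+1}\left[Y_{t+1}+(1+r)A_{t+1}-C_{t+1}-A_{t+2}\right]+\cdots\right],\tag{5}\end{equation} so formally differentiating term by term under $E_0$ gives $E_0[-\lambda_t+\lambda_{t+1}(1+r)]$. The step I cannot justify is the one that turns this into $-\lambda_t+E_t[\lambda_{t+1}(1+r)]$, i.e. differentiating inside the conditional expectation as in $(1)$.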
Let $A\in\mathbb{R}^{n\times n}$ be symmetric positive definite and consider solving the linear system $Ax = b$. Show that the symmetric Gauss-Seidel iteration converges for any $x_0$. Solution - Since $A$ is symmetric positive definite and $A = D - L - U$ is the usual partitioning into diagonal, strict lower and strict upper triangular parts, we have $$P_{sgs} = (D - L)D^{-1}(D - U) = CC^T,$$ where $C$ is a nonsingular lower triangular matrix with positive diagonal elements, and it follows that $P_{sgs}$ is symmetric positive definite. Since $A$ and $P$ are both symmetric positive definite, we can use the result that if the symmetric matrix $2P - A$ is positive definite then the iteration converges. Computing $2P - A$ yields \begin{align*} 2\left[ (D-L)D^{-1}(D - U)\right] - A. \end{align*} Before I continue: my professor's solution sets $U = L^T$, and then, after pushing around matrices, obtains $2P - A = A + 2LD^{-1}L^T$, from which he concludes that this is the sum of two symmetric positive definite matrices and is therefore also symmetric positive definite. Therefore, the symmetric Gauss-Seidel iteration converges for any $x_0$. It is not intuitive to me why he lets $U = L^T$, and the algebra or shifting of matrices that yields the solution does not make sense to me at all. Any suggestions are greatly appreciated.
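(Not a proof, but here is a small NumPy sanity check of the claim, using the splitting and notation above. It forms $P_{sgs}$ explicitly, which is fine for a small test matrix even though a practical implementation would use forward and backward Gauss-Seidel sweeps instead.)

import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # symmetric positive definite by construction
b = rng.standard_normal(n)

D = np.diag(np.diag(A))
L = -np.tril(A, -1)              # strict lower part, so that A = D - L - U
U = -np.triu(A, 1)               # strict upper part (here U = L.T since A is symmetric)

P = (D - L) @ np.linalg.inv(D) @ (D - U)   # P_sgs = (D - L) D^{-1} (D - U)

x = rng.standard_normal(n)       # arbitrary starting vector x_0
x_true = np.linalg.solve(A, b)
for _ in range(200):
    # symmetric Gauss-Seidel in preconditioned-residual form:
    # x_{k+1} = x_k + P^{-1} (b - A x_k)
    x = x + np.linalg.solve(P, b - A @ x)

print(np.linalg.norm(x - x_true))   # tiny, illustrating convergence from this x_0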
Sparse probabilistic Boolean network problems: A partial proximal-type operator splitting method

College of Mathematics and Computer Science, Fuzhou University, Fuzhou 350108, China

The sparse probabilistic Boolean network (SPBN) model has been applied in various fields of industrial engineering and management. The goal of this model is to find a sparse probability distribution based on a given transition-probability matrix and a set of Boolean networks (BNs). In this paper, a partial proximal-type operator splitting method is proposed to solve a separable minimization problem arising from the study of the SPBN model. None of the subproblem solvers of the proposed method involves matrix multiplication, and consequently the proposed method can be used to deal with large-scale problems. The global convergence of the proposed method to a critical point is proved under some mild conditions. Numerical experiments on some real probabilistic Boolean network problems show that the proposed method is effective and efficient compared with some existing methods.

Keywords: Sparse probabilistic Boolean network, $\ell_{\frac{1}{2}}$-regularization, separable optimization problem, method of multipliers, proximal point algorithm.

Mathematics Subject Classification: Primary: 90B15, 90C52; Secondary: 90C90.

Citation: Kangkang Deng, Zheng Peng, Jianli Chen. Sparse probabilistic Boolean network problems: A partial proximal-type operator splitting method. Journal of Industrial & Management Optimization, 2019, 15(4): 1881-1896. doi: 10.3934/jimo.2018127
[Table: total iteration numbers and identified major BNs at different stopping errors.]
Sanaris's answer is a great, succinct list of what each term in the free energy expression stands for: I'm going to concentrate on the $T\,S$ term (which you likely find the most mysterious) and hopefully give a little more physical intuition. Let's also think of a chemical or other reaction, so that we can concretely talk about a system changing and thus making some of its internal energy $H=U+p\,V$ available for work. The $T S$ term arises roughly from the energy that is needed to "fill up" the rotational, vibrational, translational and otherwise distractional thermal energies of the constituents of a system. Simplistically, you can kind of think of its being related to the idea that you must use some of the energy released to make sure that the reaction products are "filled up" with heat so that they are at the same temperature as the reactants. So the $T S$ term is related to, but not the same as, the notion of heat capacity: let's look at this a bit further. Why can't we get at all the energy $\Delta H$? Well, actually we can in certain contrived circumstances. It's just that these circumstances are not useful for calculating how much energy we can practically get to. Let's think of the burning of hydrogen: $$\rm H_2+\frac{1}{2} O_2\to H_2O;\quad\Delta H \approx 143{\rm kJ\,mol^{-1}}\tag{1}$$ This is a highly exothermic one, and also one of the reactions of choice if you want to throw three astronauts, fifty tons of kit and about a twentieth of the most advanced-economy-in-the-world’s-1960s GDP at the Moon. The thing about one mole of $H_2O$ is that it can soak up less heat than the mole of $H_2$ and half a mole of $O_2$; naively this would seem to say that we can get more heat than the enthalpy change $\Delta H$, but this is not so. We imagine a thought experiment, where we have a gigantic array of enormous heat pads (all individually like an equilibrium “outside world") representing all temperatures between absolute zero and $T_0$ with a very fine temperature spacing $\Delta T$ between them. On my darker days I find myself imagining an experimental kit that looks eerily like a huge pallet on wheels of mortuary shelves, sliding in and out as needed by the experimenter! We bring the reactants into contact with the first heat pad, which is at a temperature $T_1 = T_0 - \Delta T$ a teeny-tiny bit cooler than $T_0$ thus reversibly drawing some heat $\Delta Q(T_1)$ off into the heat pad. Next, we bring the reactants into contact with the second heat pad at temperature $T_2 = T_0 - 2\,\Delta T$, thus reversibly drawing heat $\Delta Q(T_2)$ off into that heat pad. We keep cooling then shifting to the next lowest heat pad until we have visited all the heat pads and thus sucked all the heat off into our heat pads: see my sketch below: Now the reactants are at absolute zero. There is no heat needed to "fill them up" to their temperature, so we can extract all the enthalpy $\Delta H$ from the reaction as useful work. Let's imagine we can put this work aside in some ultrafuturistic perfect capacitor, or some such lossless storage for the time being. Now we must heat our reaction products back up to standard temperature, so that we know what we can get out of our reaction if the conditions do not change. So, we simply do the reverse, as sketched below: Notice that I said that $H_2O$ soaks up less heat than the reactants. 
This means that, as we heat the products back up to standard temperature, we take from the heat pads less heat in warming up the water than we put into them in cooling the reactants down. So far, so good. We have gotten all the work $\Delta H$ out into our ultracapacitor without losing any! And we're back to our beginning conditions, or so it seems! What's the catch? The experimental apparatus that let us pull this trick off is NOT back at its beginning state. We have left heat in the heat pads. We have thus degraded them: they have warmed up ever so slightly and so cannot be used indefinitely to repeatedly do this trick. If we tried to do the trick too many times, eventually the heat pads would be at ambient temperature and would not work any more. So we haven’t reckoned the free energy at the standard conditions, rather we have simply calculated the free energy $\Delta H$ available in the presence of our unrealistic heat sink array. To restore the system to its beginning state and calculate what work we could get if there were no heat sink array here, we must take away the nett heat flows we added to all the heat pads and send them into the outside World at temperature $T_0$. This is the only "fair" measure, because it represents something that we could do with arbitrarily large quantities of reactants. But the outside World at $T_0$ is warmer than any of the heat pads, so of course this heat transfer can’t happen spontaneously, simply by dint of Carnot’s statement of the second law! We must therefore bring in a reversible heat pump and use some of our work $\Delta H$ to pump this heat into the outside world to restore standard conditions: we would connect an ideal reversible heat pump to each of the heat pads in turn and restore them to their beginning conditions, as sketched below: This part of the work that we use to run the heat pump and restore all the heat pads, if you do all the maths, is exactly the $T\,\Delta S$ term. The above is a mechanism whereby the following statement in Jabirali's Answer holds: Processes that increase the Gibbs free energy can be shown to increase the entropy of the system plus its surroundings, and will therefore be prevented by the second law of thermodynamics. The nice thing about the above is that it is a great way to look at endothermic reactions. In an endothermic reaction, we imagine having an energy bank that we can borrow from temporarily. After we have brought the products back up to temperature, we find we have both borrowed $-\Delta H$ from the energy bank and put less heat back into the heat pads than we took from them. So heat can now flow spontaneously from the environment to the heat pads to restore their beginning state, because the heat pads are all at a lower temperature than the environment. As this heat flows, we can use a reversible heat engine to extract work from the heat flowing down the gradient. This work is, again, $-T\,\Delta S$, which is a positive work gotten from the heat flowing down the temperature gradient. The $-T\,\Delta S$ can be so positive that we can pay back the $\Delta H$ we borrowed and have some left over. If so, we have an endothermic reaction, and a nett free energy: this energy coming from the heat flowing spontaneously inwards from the environment to fill the higher entropy products (higher than the entropy of the reactants). Take heed that, in the above, I have implicitly assumed the Nernst Heat Postulate - the not quite correct third law of thermodynamics - see my answer here for more details. For the present discussion, this approximate law is easily good enough.
Bases of a trapezoid: \(a\), \(b\)
Legs of a trapezoid: \(c\), \(d\)
Midline of a trapezoid: \(m\)
Altitude of a trapezoid: \(h\)
Perimeter: \(P\)
Diagonals of a trapezoid: \(p\), \(q\)
Angle between the diagonals: \(\varphi\)
Radius of the circumscribed circle: \(R\)
Radius of the inscribed circle: \(r\)
Area: \(S\)

A trapezoid (or a trapezium) is a quadrilateral in which (at least) one pair of opposite sides is parallel. Sometimes a trapezoid is defined as a quadrilateral having exactly one pair of parallel sides. The parallel sides are called the bases, and the two other sides are called the legs. A trapezoid in which the legs are equal is called an isosceles trapezoid. A trapezoid in which at least one angle is a right angle (\(90^\circ\)) is called a right trapezoid.

The midline of a trapezoid is parallel to the bases and equal to the arithmetic mean of the lengths of the bases:
\(m = {\large\frac{{a + b}}{2}\normalsize},\;\) \(m\parallel a,\;\) \(m\parallel b\)

Diagonals of a trapezoid (if \(a \gt b\)):
\(p = \sqrt {\large\frac{{{a^2}b - a{b^2} - b{c^2} + a{d^2}}}{{a - b}}\normalsize},\;\) \(q = \sqrt {\large\frac{{{a^2}b - a{b^2} - b{d^2} + a{c^2}}}{{a - b}}\normalsize} \)

Perimeter of a trapezoid:
\(P = a + b + c + d\)

Area of a trapezoid:
\(S = {\large\frac{{a + b}}{2}\normalsize} h = mh\)
\(S = {\large\frac{{a + b}}{2}\normalsize} \sqrt {{c^2} - {{\left[ {\large\frac{{{{\left( {a - b} \right)}^2} + {c^2} - {d^2}}}{{2\left( {a - b} \right)}}\normalsize} \right]}^2}} \)

All four vertices of an isosceles trapezoid lie on a circumscribed circle. Radius of the circle circumscribed about an isosceles trapezoid:
\(R = {\large\frac{{c\sqrt {ab + {c^2}} }}{{\sqrt {\left( {2c - a + b} \right)\left( {2c + a - b} \right)} }}\normalsize}\)

Diagonal of an isosceles trapezoid:
\(p = \sqrt {ab + {c^2}} \)

Altitude of an isosceles trapezoid:
\(h = \sqrt {{c^2} - {\large\frac{1}{4}\normalsize}{{\left( {a - b} \right)}^2}} \)

If the sum of the bases of a trapezoid is equal to the sum of its legs, all four sides of the trapezoid are tangent to an inscribed circle: \(a + b = c + d\). Radius of the inscribed circle: \(r = {\large\frac{h}{2}\normalsize}\), where \(h\) is the altitude of the trapezoid.
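(As a quick numerical check of the two area formulas above, here is a short Python sketch; the side lengths are arbitrary, and the second formula assumes \(a \ne b\) and a valid trapezoid.)

import math

def trapezoid_area_from_height(a, b, h):
    """S = (a + b)/2 * h = m * h, where m is the midline."""
    return 0.5 * (a + b) * h

def trapezoid_area_from_sides(a, b, c, d):
    """S = (a + b)/2 * sqrt(c^2 - [((a - b)^2 + c^2 - d^2) / (2(a - b))]^2)."""
    t = ((a - b) ** 2 + c ** 2 - d ** 2) / (2.0 * (a - b))
    return 0.5 * (a + b) * math.sqrt(c ** 2 - t ** 2)

# isosceles example: bases a = 7, b = 3, legs c = d = 4
a, b, c, d = 7.0, 3.0, 4.0, 4.0
h = math.sqrt(c ** 2 - 0.25 * (a - b) ** 2)   # altitude of an isosceles trapezoid
print(trapezoid_area_from_height(a, b, h))     # both lines should print the same value
print(trapezoid_area_from_sides(a, b, c, d))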
So I wrote a somewhat tongue-in-cheek blog post a few years ago titled "Resolving the Cambridge capital controversy with abstract algebra" [RCCC I] that called the Cambridge Capital Controversy [CCC] for Cambridge, UK in terms of the original debate they were having — summarized by Joan Robinson's claim that you can't really add apples and oranges (or in this case printing presses and drill presses) to form a sensible definition of capital. I used a bit of group theory and the information equilibrium framework to show that you can't simply add up factors of production. I mentioned at the bottom of that post that there are really easy ways around it — including a partition function approach in my paper — but Cambridge, MA (Solow and Samuelson) never made those arguments. On the Cambridge, MA side no one seemed to care because the theory seemed to "work" (debatable). A few years passed and eventually Samuelson conceded Robinson and Sraffa were in fact right about their re-switching arguments. A short summary is available in an NBER paper from Baqaee and Farhi, but what interested me about that paper was that the particular way they illustrated it made it clear to me that the partition function approach also gets around the re-switching arguments. So I wrote that up in a blog post with another snarky title "Resolving the Cambridge capital controversy with MaxEnt" [RCCC II] (a partition function is a maximum entropy distribution, or MaxEnt). This of course opened a can of worms on Twitter when I tweeted out the link to my post. The first volley was several people saying Cobb-Douglas functions were just a consequence of accounting identities or that they fit any data — a lot of which was based on papers by Anwar Shaikh (in particular the "humbug" production function). I added an update to my post saying these arguments were disingenuous — and in my view academic fraud, because they rely on a visual misrepresentation of data as well as an elision of the direction of mathematical implication. Solow pointed out the former in his 1974 response to Shaikh's "humbug" paper (as well as the fact that Shaikh's data shows labor output is independent of capital, which would render the entire discussion moot if true), but Shaikh has continued to misrepresent "humbug" until at least 2017 in an INET interview on YouTube. The funny thing is that I never really cared about the CCC — my interest on this blog is research into economic theory based on information theory. RCCC I and RCCC II were both primarily about how you would go about addressing the underlying questions in the information equilibrium framework. However, the subsequent volleys have brought up even more illogical or plainly false arguments against aggregate production functions that seem to have sprouted in the Post-Keynesian walled garden. I believe it's because "mainstream" academic econ has long since abandoned arguing about it, and like my neglected back yard a large number of weeds have grown up. This post is going to do a bit of weeding.

Constant factor shares!

Several comments brought up that Cobb-Douglas production functions can fit any data assuming (empirically observed) constant factor shares.
However, this is just a claim that the gradient $$ \nabla = \left( \frac{\partial}{\partial \log L} , \frac{\partial}{\partial \log K} \right) $$ is constant, which a fortiori implies a Cobb-Douglas production function $$ \log Y = a \log L + b \log K + c $$ A backtrack is that it's only constant factor shares in the neighborhood of observed values, but that just means Cobb-Douglas functions are a local approximation (i.e. the tangent plane in log-linear space) to the observed region. Either way, saying "with constant factor shares, Cobb-Douglas can fit any data" is saying vacuously "data that fits a Cobb-Douglas function can be fit with a Cobb-Douglas function". Leontief production functions also have constant factor shares locally, but in fact have two tangent planes, which just retreats to the local description (data that is locally Cobb-Douglas can be fit with a local Cobb-Douglas function).

Aggregate production functions don't exist!

The denial that the functions even exist is by far the most interesting argument, but it's still not logically sound. At least it's not disingenuous — it could just use a bit of interdisciplinary insight. Jo Michell linked me to a paper by Jonathan Temple with the nonthreatening title "Aggregate production functions and growth economics" (although the filename is "Aggreg Prod Functions Dont Exist.Temple.pdf" and the first line of the abstract is "Rigorous approaches to aggregation indicate that aggregate production functions do not exist except in unlikely special cases.") However, not too far in (Section 2, second paragraph) it makes a logical error of extrapolating from $N = 2$ to $N \gg 1$: It is easy to show that if the two sectors each have Cobb-Douglas production technologies, and if the exponents on inputs differ across sectors, there cannot be a Cobb-Douglas aggregate production function. It's explained how the argument proceeds in a footnote: The way to see this is to write down the aggregate labour share as a weighted average of labour shares in the two sectors. If the structure of output changes, the weights and the aggregate labour share will also change, and hence there cannot be an aggregate Cobb-Douglas production function (which would imply a constant labour share at the aggregate level). This is true for $N = 2$, because the change of one "labor share state" (specified by $\alpha_{i}$ for an individual sector $y_{i} \sim k^{\alpha_{i}}$) implies an overall change in the ensemble average labor share state $\langle \alpha \rangle$. However, this is a bit like saying that if you have a two-atom ideal gas, the kinetic energy of one of the atoms can change and so the average kinetic energy of the two-atom gas doesn't exist, and therefore (rigorously!) there is no such thing as temperature (i.e. a well defined kinetic energy $\sim k T$) for an ideal gas in general with more than two atoms ($N \gg 1$) except in unlikely special cases. I was quite surprised that econ has disproved the existence of thermodynamics!
Joking aside, if you have more than two sectors, it is possible you could have an empirically stable distribution over labor share states $\alpha_{i}$ and a partition function (details of the approach appear in my paper): $$ Z(\kappa) = \sum_{i} e^{- \kappa \alpha_{i}} $$ Take $\kappa \equiv \log (1+ (k-k_{0})/k_{0})$, which means $$ \langle y \rangle \sim k^{\langle \alpha \rangle} $$ where the ensemble average is $$ \langle X \rangle \equiv \frac{1}{Z} \sum_{i} \hat{X} e^{- \kappa \alpha_{i}} $$ There are likely more ways than this partition function approach based on information equilibrium to get around the $N = 2$ case, but we only need to construct one example to disprove nonexistence. Basically this means that unless the output structure of a single firm affects the whole economy, it is entirely possible that the output structure of an ensemble of firms could have a stable distribution of labor share states. You cannot logically rule it out. What's interesting to me is that in a whole host of situations, the distributions of these economic states appear to be stable (and in some cases, in an unfortunate pun, stable distributions). For some specific examples, we can look at profit rate states and stock growth rate states. Now you might not believe these empirical results. Regardless, the logical argument is not valid unless your model of the economy is unrealistically extremely simplistic (like modeling a gas with a single atom — not too unlike the unrealistic representative agent picture). There is of course the possibility that empirically this doesn't work (much like it doesn't work for a whole host of non-equilibrium thermodynamics processes). But Jonathan Temple's paper is a bunch of wordy prose with the odd equation — it does not address the empirical question. In fact, Temple re-iterates one of the defenses of the aggregate production function approaches that has vexed these theoretical attempts to knock them down (section 4, first paragraph): One of the traditional defenses of aggregate production functions is a pragmatic one: they may not exist, but empirically they ‘seem to work’. They of course would seem to work if economies are made up of more than two firms (or sectors) and have relatively stable distributions of labor share states. To put it yet another way, Temple's argument relies on a host of unrealistic assumptions about an economy — that we know the distribution isn't stable, and that there are only a few sectors, and that the output structure of these few firms changes regularly enough to require a new estimate of the exponent $\alpha$ but not regularly enough that the changes create a temporal distribution of states.

Fisher! Aggregate production functions are highly constrained!

There are a lot of references that trace all the way back to Fisher (1969) "The existence of aggregate production functions" and several people who mentioned Fisher or work derived from his papers. The paper is itself a survey of restrictions believed to constrain aggregate production functions, but it seems to have been written from the perspective that an economy is a highly mathematical construct that can either only be described by $C^{2}$ functions or not at all. In a later section (Sec. 6) talking about whether maybe aggregate production functions can be good approximations, Fisher says: approximations could only result if [the approximation] ... exhibited very large rates of change ... In less technical language, the derivatives would have to wiggle violently up and down all the time.
Heaven forbid were that the case! He cites in a footnote the rather ridiculous example of $\lambda \sin (x/\lambda)$ (locally $C^{2}$!) — I get the feeling he was completely unaware of stochastic calculus or quantum mechanics and therefore could not imagine a smooth macroeconomy made up of noisy components, only a few pathological examples from his real analysis course in college. Again, a nice case for some interdisciplinary exchange! I wrote a post some years ago about the $C^{2}$ view economists seem to take versus a far more realistic noisy approach in the context of the Ramsey-Cass-Koopmans model. In any case, why exactly should we expect firm-level production functions to be $C^{2}$ functions that add to a $C^{2}$ function? One of the constraints Fisher notes is that individual firm production functions (for the $i^{th}$ firm) must take a specific additive form: $$ f_{i}(K_{i}, L_{i}) = \phi_{i}(K_{i}) + \psi_{i}(L_{i}) $$ This is probably true if you think of an economy as one large $C^{2}$ function that has to factor (mathematically, like, say, a polynomial) into individual firms. But like Temple's argument, it denies the possibility that there can be stable distributions of states $(\alpha_{i}, \beta_{i})$ for individual firm production functions (that even might change over time!) such that $$ Y_{i} = f_{i}(K_{i}, L_{i}) = K_{i}^{\alpha_{i}}L_{i}^{\beta_{i}} $$ but $$ \langle Y \rangle \sim K^{\langle \alpha \rangle} L^{\langle \beta \rangle} $$ The left/first picture is a bunch of random production functions with beta-distributed exponents. The right/second picture is an average of 10 of them. In the limit of an infinite number of firms, constant returns to scale hold (i.e. $\langle \alpha \rangle + \langle \beta \rangle \simeq 0.35 + 0.65 = 1$) at the macro level — however individual firms aren't required to have constant returns to scale (many don't in this example). In fact, none of the individual firms have to have any of the properties of the aggregate production function. (You don't really have to impose that constraint at either scale — and in fact the whole Solow model works much better in terms of nominal quantities and without constant returns to scale.) Since these are simple functions, they don't have that many properties, but we can include things like constant factor shares or constant returns to scale empirically. The information-theoretic partition function approach actually has a remarkable self-similarity between macro (i.e. aggregate level) and micro (i.e. individual or individual firm level) — this self-similarity is behind the reason why Cobb-Douglas or diagrammatic ("crossing curve") models at the macro scale aren't obviously implausible. Both the arguments of Temple and Fisher seem to rest on strong assumptions about economies constructed from clean, noiseless, abstract functions — and either a paucity or surfeit of imagination (I'm not sure). It's a kind of love-hate relationship with neoclassical economics — working within its confines to try to show that it's flawed. A lot of these results are cases of what I personally would call mathiness. I'm sure Paul Romer might think they're fine, but to me they sound like an all-too-earnest undergraduate math major fresh out of real analysis trying to tell us what's what. Sure, man, individual firms' production functions are continuous and differentiable additive functions. So what exactly have you been smoking?
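(As a concrete toy version of the pictures described above, and only my own sketch rather than the original post's code: draw firm-level exponents from a beta distribution and compare the ensemble-average output with a single Cobb-Douglas function built from the average exponent. With many firms they nearly coincide, and $\langle \alpha \rangle + \langle \beta \rangle \simeq 1$ emerges even though individual firms don't have constant returns to scale.)

import numpy as np

rng = np.random.default_rng(42)
n_firms = 1000
# firm-level exponents alpha_i, beta_i drawn from beta distributions with
# means 0.35 and 0.65 (a toy choice); individual firms need not have
# constant returns to scale
alpha = rng.beta(3.5, 6.5, n_firms)
beta = rng.beta(6.5, 3.5, n_firms)

K = np.linspace(0.5, 2.0, 100)            # capital grid, labour held fixed at 1
Y_firms = K[None, :] ** alpha[:, None]    # y_i ~ k^{alpha_i} for each firm
Y_avg = Y_firms.mean(axis=0)              # ensemble-average output

Y_macro = K ** alpha.mean()               # single Cobb-Douglas with <alpha>
print(alpha.mean() + beta.mean())         # close to 0.35 + 0.65 = 1
print(np.max(np.abs(Y_avg - Y_macro) / Y_macro))  # below about 1% over this range of k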
These constraints on production functions from Fisher and Temple actually remind me a lot of Steve Keen's definition of an equilibrium that isn't attainable — it's mathematically forbidden! It's probably not a good definition of equilibrium if you can't even come up with a theoretical case that satisfies it. Fisher and Temple can't really come up with a theoretical production function that meets all their constraints besides the trivial "all firms are the same" function. It's funny that Fisher actually touches on that in one of his footnotes (#31): Honesty requires me to state that I have no clear idea what technical differences actually look like. Capital augmentation seems unduly restrictive, however. If it held, all firms would produce the same market basket of outputs and hire the same relative collection of labors. But the bottom line is that these claims to have exhausted all possibilities are just not true! I get the feeling that people have already made up their minds which side of the CCC they stand on, and it doesn't take much to confirm their biases, so they don't ask questions after e.g. Temple's two-sector economy. That settles it then! Well, no ... as there might be more than two sectors. Maybe even three!
I'm trying to implement the finite-difference WENO method for progressively more complex equations and systems. At the moment I'm successful with single scalar equations (constant advection and Burgers), and I'm trying to advance to the 1D system of Euler equations. I've read that there are two approaches to applying WENO to systems of conservation laws: 1) Component-wise (by applying it to the conserved variables (rho, rho*u, E)). They say that this method is OK for simple test cases. 2) Characteristic-wise (by proper decomposition via eigenvectors of the flux Jacobian etc.). This method is said to be necessary for more complex flows. Before trying the more complex characteristic decomposition, I'm trying to implement the simpler component-wise method. So far it works acceptably well on a test case with contact discontinuities (i.e. only density varies while pressure and speed are constant) - it advects them with some smearing. But when I introduce even a weak shock, it generates growing instabilities near the front and the computation collapses after some time. Is this normal behavior for the method, or should it be able to compute shocks as well (maybe with some oscillations or much smearing etc.)? Maybe the problem is that I use the non-split Lax-Friedrichs flux function? Here is this flux function: $$ h(a,b) = \frac{1}{2} \left( f(a) + f (b) - \alpha (b-a) \right) $$ where $\alpha = \max|\lambda|$ is the maximum eigenvalue (in magnitude) of the flux Jacobian.
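For reference, here is a minimal Python sketch of the flux function quoted above applied to the 1D Euler equations with conserved variables (rho, rho*u, E). The value gamma = 1.4 is an assumption, and the wave-speed estimate |u| + c is the usual local (Rusanov-type) choice for alpha rather than a global maximum.

```python
import numpy as np

def euler_flux(U, gamma=1.4):
    """Physical flux F(U) for the 1D Euler equations, U = (rho, rho*u, E)."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def lax_friedrichs_flux(UL, UR, gamma=1.4):
    """Numerical flux h(a, b) = 0.5*(f(a) + f(b) - alpha*(b - a)).

    alpha is taken as the largest wave speed |u| + c over the two input
    states (the local/Rusanov variant); using the maximum over the whole
    grid instead gives the global Lax-Friedrichs flux from the question.
    """
    def max_speed(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
        c = np.sqrt(gamma * p / rho)
        return abs(u) + c
    alpha = max(max_speed(UL), max_speed(UR))
    return 0.5 * (euler_flux(UL, gamma) + euler_flux(UR, gamma) - alpha * (UR - UL))
```

Either variant of this flux is dissipative enough that a component-wise scheme should usually survive a weak shock (possibly with oscillations), so persistent blow-up near the shock more often points to a bug elsewhere (what is being reconstructed, boundary treatment, or the time-step restriction) than to the flux choice itself.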
This question already has an answer here: How does one prove that the Dirac Delta distribution is the eigenfunction of the position operator $\hat{x}$? In math, why does $\langle x'|x\rangle = \delta(x'-x)$? This is true not just for position eigenvectors. This is true for all eigenvectors in their own (orthonormal) eigenbasis. For an operator with discrete eigenvalues $n$ with eigenvectors $|n\rangle$, $$\langle n'|n\rangle=\delta_{n,n'}$$ where $\delta_{n,n'}$ is the Kronecker delta that is $1$ when $n=n'$ and $0$ otherwise. For an operator with a continuous spectrum of eigenvalues $p$ with eigenvectors $|p\rangle$, $$\langle p'|p\rangle=\delta(p'-p)$$ where the $\delta$ this time is the Dirac delta function. This is all just because when expressing a vector in a basis where that vector is a basis vector, it only has a component along that one vector (itself). This isn't part of the main question, but I think it relates to the general idea of what the above expressions really mean and why they are used in the way they are used. Do you happen to know why it is necessary to make the eigenfunction infinitely tall when working in the continuous spectrum? This is a poor picture of the Dirac delta function. Let's first look at the Kronecker delta. If we have this in a sum $$\sum_{n'} \delta_{n,n'}c_{n'}$$ we know this sum will just evaluate to $c_n$. The Kronecker delta "kills off" all other $c_{n'}$ terms. Now let's move to the continuous version of this example with the Dirac delta function: $$\int \delta(p-p')c(p')\text d p'=c(p)$$ Now the integral is a "continuous sum" of the terms $c(p')\delta(p-p')\text d p'$. Notice how here we have $\text d p'$ in each term. Therefore, if we want to pick out just $c(p)$ from this sum, we need $\delta(p-p')$ to be equal to $1/\text d p'$ when $p'=p$ and $0$ otherwise. Since $\text d p'$ is an infinitesimal amount, $1/\text d p'$ is an infinite amount. This is why you usually see the Dirac delta function as an infinite spike, but saying this somewhat obscures the above analogy with the discrete case.
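As an illustrative numerical aside (not part of the original answer), the "infinitely tall spike" picture can be checked by replacing the Dirac delta with a narrow normalized Gaussian and watching the integral pick out $c(p)$. The test function and the width values below are arbitrary choices.

```python
import numpy as np

def delta_approx(x, eps):
    """A normalized Gaussian of width eps, used as a stand-in for the Dirac delta."""
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

p = np.linspace(-10.0, 10.0, 20001)
dp = p[1] - p[0]
c = np.cos(p) * np.exp(-p**2 / 8)          # arbitrary smooth test function c(p')

for eps in (1.0, 0.1, 0.01):
    # "continuous sum" of c(p') * delta(p - p') dp', evaluated at p = 2
    picked = np.sum(delta_approx(p - 2.0, eps) * c) * dp
    print(eps, picked, c[np.argmin(np.abs(p - 2.0))])
```

As eps shrinks, the integral converges to the value of c at p = 2, even though the peak height of the approximate delta grows without bound.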
Information Theory in Gambling Part I: Variance and the Kelly Criterion Let's play a game. Imagine you have $100 and you'd like to maximize your money. You flip a weighted coin that comes up heads 60% of the time, and tails 40% of the time. You can play as many times as you would like. How much would you bet? The most intuitive answer may not be the optimal one. Well, let's look at the two extremes. If you choose to bet $0, i.e. not play the game at all, you will leave with $100 with probability 1. If you bet the entire $100, you have a 60% chance of doubling up, and a 40% chance of leaving with nothing. That gives you an expected value (EV) of 0.6(200) + 0.4(0) = 120. Since $120/$100 = 1.2, in the long run you would expect to make 1.2x your bet. So you might naturally think that the optimal strategy is to always bet all your money. But what if it lands on tails the first time? Then you can no longer continue to play the game since you will have no more money. In poker language, you won't be able to realize your equity because even though in the limit of infinite wealth, you will make money, you can't withstand the variance of any individual flip. In fact, in the limit of infinite games, you will go broke with probability 1. So it seems like the optimal betting strategy is somewhere in between the two, and should probably vary according to the size of your budget. Kelly Criterion This problem was studied by mathematician John Kelly in 1956 while he was an associate under Claude Shannon at Bell Labs. Kelly proposed maximizing the expected (log) wealth [1]. For a binary game that offers odds $l$ (you receive $l$ times your bet if you win, not including your bet) on an outcome with probability $q$, this would be: \begin{equation} \mathbb{E}[R] = q \log(1 + xl) + (1 - q)\log (1-x) \end{equation} where $x$ is the proportion of your wealth to bet. To find the $x$ which maximizes this expression, we simply take the derivative and set it equal to zero: \begin{align} \frac{d\mathbb{E}[R]}{dx} &= q\frac{d}{dx} \log(1 + xl) + (1-q)\frac{d}{dx}\log(1-x) \newline &=\frac{ql}{1+xl} - \frac{1-q}{1-x}= 0 \end{align} Rearranging to solve for $x$: \begin{equation} x = q - \frac{1 - q}{l} \end{equation} This is commonly referred to as the Kelly bet. For our game above, plugging in 0.6 for $q$ and 1 for $l$, we get 0.6 - 0.4 = 0.2. If we plot our wealth as a function of our bet, we see that the optimal bet size is indeed 20% of our wealth. In the limit of infinite games, the Kelly bet will yield a higher payoff than any other bet. This formula (and its variants) is widely used in Blackjack, sports betting, and financial investing to great success [4]. Information Theoretic Approach Unsurprisingly, Kelly studied this problem from the perspective of information theory and posed the question as follows: suppose you were able to communicate by phone with your future self. Naturally, being the ambitious gambler you are, you use this opportunity to get the results of events that haven't occurred yet - making that perfect March Madness bracket, investing in bitcoin, etc. However, your telegram is noisy - it only transmits the correct signal probabilistically. How confident can you be in your bet? If you bet a fraction $x$ of your wealth, then your wealth at round $N$ is: \begin{equation}R_N = (1+x)^W(1-x)^LV_0\end{equation} where $W$ and $L$ are the number of correct and incorrect bets respectively.
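A small Python sketch of the result above (my own illustration, not from the original post): compute the Kelly fraction for the 60/40 coin at even odds and compare a few fixed-fraction strategies by simulation. The number of flips and of simulated players are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def kelly_fraction(q, l):
    """Kelly bet x = q - (1 - q)/l for win probability q and net odds l."""
    return q - (1.0 - q) / l

def simulate(frac, q=0.6, l=1.0, n_flips=1000, bankroll=100.0):
    """Bet a fixed fraction of current wealth on each flip of the 60/40 coin."""
    for _ in range(n_flips):
        if rng.random() < q:
            bankroll *= 1.0 + frac * l
        else:
            bankroll *= 1.0 - frac
    return bankroll

x_star = kelly_fraction(0.6, 1.0)      # 0.2 for the game in the text
print("Kelly fraction:", x_star)
for frac in (0.1, x_star, 0.4, 1.0):
    final = np.median([simulate(frac) for _ in range(200)])
    print(f"bet {frac:.1f} of wealth -> median final bankroll ~ {final:.2f}")
```

The median is reported rather than the mean because the mean is dominated by rare lucky runs: betting the full bankroll typically ends at zero, over-betting (0.4) stagnates or shrinks, and the 20% Kelly fraction grows fastest in the long run.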
So as before, we want to maximize the expected wealth (which is also the maximum rate of growth) over an infinite number of games: \begin{align} \mathbb{E}[R] &= \lim_{N\to\infty}\bigg[\frac{W}{N}\log(1+x) + \frac{L}{N}\log(1-x)\bigg] \newline &= q \log (1+x) + (1-q) \log(1-x)\end{align} where $q$ is the probability of receiving the correct message, which makes this the same expression as the one we derived in the previous section. So another way to think about it is, assuming you bet correctly, the maximum exponential rate of growth of your capital is equal to the rate of transmission of information over your telegram. Multiple Outcome Games The Kelly Criterion generalizes similarly to games with multiple outcomes (e.g. horse racing). Given a game with multiple outcomes, \begin{equation}\mathbb{E}[R] = \sum_{s,r} p(s, r) \log a(s|r) + \sum_s p(s) \log\alpha_s\end{equation} where $p(s, r)$ is the joint probability that event $s$ occurs and message $r$ was sent by your future self, $a(s|r)$ is the probability you bet on $s$ after receiving a message that tells you to bet on $r$, and $\alpha_s$ is the odds paid for event $s$ plus your original bet (the derivation can be found in [1]). We can set $\sum_s a(s|r) = 1$ since we can place bets that cancel out according to $\alpha_s$. We can maximize the above expression by setting $a(s|r) = q(s|r)$, since \begin{align}a(s|r) &= \frac{p(s, r)}{\sum_k p(k, r)} \newline&= \frac{p(s,r)}{q(r)} \newline&= q(s|r)\end{align} This says that the optimal betting strategy is to bet according to $q(s|r)$ only, so your best bet is actually to ignore the posted odds! If we recognize both terms in the expected wealth equation as entropy terms of our communication channel, we can derive capital as a function of the transmission rate: \begin{align} \mathbb{E}[R] &= \sum_s p(s) \log\alpha_s + \sum_{s,r} p(s, r) \log q(s|r) \newline &=H(\alpha) - H(X|Y) \end{align} which is the channel transmission rate [3]. Some Takeaways So… that's it right? If you bet according to this formula, you should never go broke …right? Well, maybe. In Part II, we'll see if we can design a game that will bankrupt a seemingly optimal player. But before you book your one way trip to Las Vegas, remember that the Kelly criterion makes some crucial assumptions: You can play the game an infinite amount of times. In a game where you only have a limited number of rounds, the Kelly bet is not optimal. You are betting your entire lifeline. If you can replenish your bankroll (e.g. in a cash poker game), the most profitable strategy is to bet all your wealth. You can compute your probability of winning exactly. In practice (stock markets, etc.), this is very difficult or impossible, so you're often recommended to bet less than the Kelly bet. The formula does not address variance, since it only computes an expectation: in practice, you may deviate (arbitrarily far) from the expected profit. The exact coin flip game described in the first section was actually conducted in a research study [2] and perhaps unsurprisingly, most participants played far from optimal. 28% of the participants busted out, and 21% reached the maximum payout of 10x their bankroll. 67% bet tails (the less likely outcome) at some point during the experiment. The reasons for these biases have been thoroughly explored in cognitive science, psychology, and economics, which I'll get to in Part II. But the next time you find yourself in a casino, you'll know how to bet to take down the house. Or at the very least you won't end up like this guy: Thanks to Jason MacDonald for discussions and feedback. References [1] Kelly, J. L. A New Interpretation of Information Rate. 1956. [2] Haghani & Dewey.
Rational Decision-Making under Uncertainty: Observed Betting Patterns on a Biased Coin. 2016. [3] Shannon, C. E. A Mathematical Theory of Communication. 1948. [4] Thorp, E. The Kelly Criterion in Blackjack, Sports Betting, and the Stock Market. 2006.
Let $\mathcal{X}= \{1,-1\}^n$, $E$ the set of edges and $J$ some real-valued matrix. Van der Waerden's theorem gives a constant $c$ and a set of edge weights such that $$\sum_{\mathbf{x}\in \mathcal{X}} \exp \sum_{ij \in E} J_{ij} x_i x_j = c \sum_{A\in C} f(A) $$ where $C$ consists of the Eulerian subgraphs over $E$, and $f(A)$ is the weight of $A$, defined as the product of the weights of the edges in $A$. The edge weights are strictly below 1 in magnitude. Suppose only a small number of self-avoiding loops on the graph have non-negligible weight. We can approximate the sum by only considering Eulerian subgraphs including these loops. However, a small number of such loops can give a large number of Eulerian subgraphs. Is there a more efficient way?
Just to clarify notation, I'll be discussing the Gauss-Newton method for the problem $\min \phi(x)=(1/2) \| F(x) \|_{2}^{2}$ with the search direction $p$ computed as the solution to the linear system of equations $J(x^{(k)})^{T}J(x^{(k)}) p = - J(x^{(k)})^{T} F(x^{(k)})$ where $J(x)$ is the matrix of partial derivatives of components of $F(x)$ with respect to the components of $x$. In the following, I'll drop the superscript $(k)$ notation to save some typing. Assuming that $\phi(x)$ and the individual components of $F(x)$ are continuously differentiable, then by some elementary vector calculus, $\nabla \phi(x)=J(x)^{T}F(x)$ and assuming that $\phi(x)$ and the individual components of $F(x)$ are twice continuously differentiable, $\nabla^{2} \phi(x)= J(x)^{T}J(x)+ \sum_{i=1}^{m} F_{i}(x) \nabla^{2}F_{i}(x)$. "From what I understand, the Gauss-Newton method is used to find a search direction, then the step size, etc., can be determined by some other method." In the simplest version of the Gauss-Newton method, there is no line search. The iteration is simply $x^{(k+1)}=x^{(k)}+p$. There is no guaranteed convergence result for this simple method, just as there is no guarantee of convergence for Newton's method with the full Hessian and unit step size. However, it does often work in practice. Adding an approximate line search to the algorithm improves its convergence properties. If a safeguarded approximate line search is used, and if the individual $F_{i}(x)$ functions have Lipschitz continuous gradients, and $J(x)^{T}J(x)$ is always nonsingular, then convergence to a stationary point can be guaranteed. See the Nocedal and Wright book cited at the end of this answer. "The Gauss-Newton method always results in a direction of strict descent." This is basically true. If the matrix $J(x)^{T}J(x)$ in the Gauss-Newton method is non-singular, then it is symmetric and positive definite and $(J(x)^{T}J(x))^{-1}$ is also symmetric and positive definite. The GN direction is $p=-(J(x)^{T}J(x))^{-1}J(x)^{T}F(x)$ and assuming that $J(x)^{T}F(x) \neq 0$, the directional derivative in the direction $p$ is $\nabla \phi(x)^{T}p=-(J(x)^{T}F(x))^{T} \; (J(x)^{T}J(x))^{-1}\; (J(x)^{T}F(x)) < 0$. If $J(x)^{T}F(x)=0$, then you're at a stationary point (and the GN method would stop.) If $J(x)^{T}J(x)$ is singular, then the Gauss-Newton step is not well defined. Actually, because of the structure of the system of equations, there will be infinitely many directions $p$ that satisfy the equations. If $p$ is chosen arbitrarily from among these directions, the method can fail to converge. "The Gauss-Newton method only requires that the directional derivative of the objective function exist – but you do not need to compute it." This question is unclear. Directional derivative of what function? In what direction? However, if what you mean by "directional derivative" is "the gradient of $\phi(x)$", then $\nabla \phi(x)=J(x)^{T}F(x)$, and this is computed during each iteration of the Gauss-Newton method. If you want the directional derivative of $\phi(x)$ in some particular direction, you can obtain it by taking the dot product of $\nabla \phi(x)$ and that direction. "The Gauss-Newton method does not require the Hessian." Undoubtedly true. The matrix $J(x)^{T}J(x)$ used in the GN method is not the Hessian of $\phi(x)$. "When you're faced with an optimization problem of the form min||F(x)||, and F(x) is non-linear, then Gauss-Newton is a good choice because it doesn't require you to compute the directional derivative."
The Gauss-Newton method works well in practice on many problems, but it can fail when $J(x)^{T}J(x)$ is singular or nearly singular. Stabilization of the Gauss-Newton method is advisable for a more robust algorithm. The Levenberg-Marquardt algorithm is an example of a stabilized version of Gauss-Newton. Again, it's not clear what you mean by "directional derivative" (of what function in what direction?) If you simply mean the gradient of $\phi(x)$, then you do compute $\nabla \phi(x)$ in the Gauss-Newton method as discussed above. In any case, this isn't a very good "because" reason. This material is discussed in many textbooks on nonlinear optimization and nonlinear regression. An authoritative source is Nocedal and Wright, Numerical Optimization, 2nd ed. 2006.
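To make the discussion concrete, here is a minimal Python sketch of the plain Gauss-Newton iteration described above (no line search, no Levenberg-Marquardt damping); the curve-fitting example at the end is an arbitrary illustration, not from the original question.

```python
import numpy as np

def gauss_newton(F, J, x0, n_iter=50, tol=1e-10):
    """Plain Gauss-Newton iteration x <- x + p with (J^T J) p = -J^T F.

    F : callable returning the residual vector F(x)
    J : callable returning the Jacobian matrix J(x)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Jx = J(x)
        Fx = F(x)
        grad = Jx.T @ Fx                 # gradient of phi(x) = 0.5*||F(x)||^2
        if np.linalg.norm(grad) < tol:   # stationary point: stop
            break
        # Solving J p = -F in the least-squares sense is equivalent to the
        # normal equations when J has full column rank; lstsq returns the
        # minimum-norm solution if J is rank deficient (though, as noted
        # above, convergence is not guaranteed in that case).
        p, *_ = np.linalg.lstsq(Jx, -Fx, rcond=None)
        x = x + p
    return x

# Tiny illustrative example: fit y = exp(a*t) with a single parameter a.
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t)
F = lambda a: np.exp(a[0] * t) - y
J = lambda a: (t * np.exp(a[0] * t)).reshape(-1, 1)
print(gauss_newton(F, J, [0.0]))   # should converge to ~0.7
```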
CryptoDB Paper: Explicit Construction of Secure Frameproof Codes Authors: Dongvu Tonien, Reihaneh Safavi-Naini Download: http://eprint.iacr.org/2005/275 Abstract: $\Gamma$ is a $q$-ary code of length $L$. A word $w$ is called a descendant of a coalition of codewords $w^{(1)}, w^{(2)}, \dots, w^{(t)}$ of $\Gamma$ if at each position $i$, $1 \leq i \leq L$, $w$ inherits a symbol from one of its parents, that is $w_i \in \{ w^{(1)}_i, w^{(2)}_i, \dots, w^{(t)}_i \}$. A $k$-secure frameproof code ($k$-SFPC) ensures that any two disjoint coalitions of size at most $k$ have no common descendant. Several probabilistic methods prove the existence of such codes, but there are not many explicit constructions. Indeed, it is an open problem in [J. Staddon et al., IEEE Trans. on Information Theory, 47 (2001), pp. 1042--1049] to explicitly construct a $q$-ary 2-secure frameproof code for arbitrary $q$. In this paper, we present several explicit constructions of $q$-ary 2-SFPCs. These constructions are generalisations of the binary inner code of the secure code in [V.D. To et al., Proceeding of IndoCrypt'02, LNCS 2551, pp. 149--162, 2002]. The length of our new code is logarithmically small compared to its size.

BibTeX
@misc{eprint-2005-12609,
  title={Explicit Construction of Secure Frameproof Codes},
  booktitle={IACR Eprint archive},
  keywords={combinatorial cryptography, fingerprinting codes, secure frameproof codes, traitor tracing},
  url={http://eprint.iacr.org/2005/275},
  note={International Journal of Pure and Applied Mathematics, Volume 6, No. 3, 2003, 343-360 [email protected] 13012 received 16 Aug 2005, last revised 17 Aug 2005},
  author={Dongvu Tonien and Reihaneh Safavi-Naini},
  year=2005
}
Deelan Cunden, Fabio, Mezzadri, Francesco, O'Connell, Neil and Simm, Nick (2019) Moments of random matrices and hypergeometric orthogonal polynomials. Communications in Mathematical Physics. ISSN 0010-3616 Abstract We establish a new connection between moments of $n \times n$ random matrices $X_n$ and hypergeometric orthogonal polynomials. Specifically, we consider moments $\mathbb{E}\,\mathrm{Tr}\, X_n^{-s}$ as a function of the complex variable $s \in \mathbb{C}$, whose analytic structure we describe completely. We discover several remarkable features, including a reflection symmetry (or functional equation), zeros on a critical line in the complex plane, and orthogonality relations. An application of the theory resolves part of an integrality conjecture of Cunden \textit{et al.}~[F. D. Cunden, F. Mezzadri, N. J. Simm and P. Vivo, J. Math. Phys. 57 (2016)] on the time-delay matrix of chaotic cavities. In each of the classical ensembles of random matrix theory (Gaussian, Laguerre, Jacobi) we characterise the moments in terms of the Askey scheme of hypergeometric orthogonal polynomials. We also calculate the leading order $n\to\infty$ asymptotics of the moments and discuss their symmetries and zeroes. We discuss aspects of these phenomena beyond the random matrix setting, including the Mellin transform of products and Wronskians of pairs of classical orthogonal polynomials. When the random matrix model has orthogonal or symplectic symmetry, we obtain a new duality formula relating their moments to hypergeometric orthogonal polynomials. Item Type: Article Keywords: Random matrix theory, Gaussian unitary ensemble, Meixner polynomials, hypergeometric functions, Jacobi polynomials, orthogonal polynomials, zeta functions Schools and Departments: School of Mathematical and Physical Sciences > Mathematics Research Centres and Groups: Probability and Statistics Research Group Subjects: Q Science > QA Mathematics Q Science > QA Mathematics > QA0273 Probabilities. Mathematical statistics Q Science > QC Physics URI: http://sro.sussex.ac.uk/id/eprint/80246
Let's suppose that we have a polynomial time randomized algorithm $A$ for a decision problem $\Pi$ with the property that$$\mathrm{Pr}[A \text{ is right}] > 1/2$$for all inputs. (In particular, independent of the input length $n$.) We can use Chernoff bounds to show something stronger, namely that for any fixed $c$, there is a polynomial time, randomized algorithm for $\Pi$ that is correct with probability at least $1-n^{-c}$. The algorithm is very simple: On an input of length $n$, we run $A$ a polynomial number of times $N$ (to be determined below) and then report the majority answer. Call this algorithm $A'$. What we need is a bound on $$\mathrm{Pr}[A' \text{ is wrong}]$$in terms of the number of repetitions $N$. The hypothesis about $A$ implies that there is an $\epsilon > 0$ such that $$\mathrm{Pr}[A \text{ is right}] \ge 1/2 + \epsilon$$Since the $N$ runs of $A$ are independent, for any $k\in [N]$ we have$$\mathrm{Pr}[k \text{ of } N \text{ runs of } A \text{ are right}]\ge Pr[X = k]$$where $X$ is a binomial random variable with parameters $N$ and $1/2+\epsilon$. Since the algorithm $A'$ is wrong exactly when at most $N/2$ of the runs of $A$ are right, we obtain an upper bound on the failure probability by a "tail estimate" bounding $$\mathrm{Pr}[X \le N/2]$$This is a prototypical time to use a Chernoff bound, since a binomial r.v. is the sum of $N$ independent random variables. One instance of the Chernoff bound says that if $Y$ is the sum of $N$ independent r.v.'s with support in $[0,1]$ and $t > 0$, then $$\mathrm{Pr}[Y < E[Y] - t] \le e^{-2t^2/N}$$Thus, since $E[X] = N/2 + N\epsilon$, we take $t = N\epsilon$ to obtain $$\mathrm{Pr}[A' \text{ is wrong}]\le e^{-2\epsilon^2 N}$$So we can take $N = (1/2)\epsilon^{-2}c\log n$, and we get the claimed error probability for $A'$.
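As a quick sanity check (my own illustration, not part of the original argument), one can simulate the majority vote and compare its empirical failure probability with the bound $e^{-2\epsilon^2 N}$; the values of $\epsilon$ and $N$ below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def amplified_error(p_correct, N, trials=20000):
    """Empirical probability that the majority of N independent runs is wrong,
    when each run is right with probability p_correct > 1/2."""
    right = rng.binomial(N, p_correct, size=trials)
    return np.mean(right <= N / 2)

eps = 0.1                            # p_correct = 1/2 + eps
for N in (11, 51, 201):
    emp = amplified_error(0.5 + eps, N)
    chernoff = np.exp(-2 * eps**2 * N)
    print(N, emp, chernoff)          # empirical failure rate vs. Chernoff bound
```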
If the utility is continuous and locally non-satiated then the Hicksian demand equals the Walrasian demand. So you'll need to look for a utility function which violates at least one of these. Let's try a simple violation of non-satiation $$u(x,y) = \left\{\begin{array}{ll}x+y & \quad x+y \leq 1 \\1 & \quad \text{ ... Convexity can be a very important assumption as much of economic analysis is built on working with convex sets, which makes things easier. Very importantly, convexity of sets allows us to work with the Separating Hyperplane and Supporting Hyperplane theorems, which have applications to many results in economics, as partially discussed here and here. Here ... That statement is not generally true. It is true of rivalrous goods (e.g. when I take a slice of a pizza, there's that much less left of the pizza for you). It is false of non-rivalrous goods (e.g. when I step into the sunshine, there is usually* no less of the sunshine for you). *Unless of course I somehow also block the sunshine for you. Without convexity, we'd lose the convenience of using the first order condition (or the tangency condition) to identify the optimal consumption bundle. As shown in the following figure, where the red-shaded region is a non-convex budget set, the tangency at point $E$ is not the optimal consumption point; rather, the optimum is a corner solution, where only $... As we have discussed in the comments, the utility function used in the book is Cobb-Douglas: $U = CM^{\mu}A^{1 - \mu}$. The well known fact mentioned in equation (3.3) on page 57 - the one that you highlighted - is that the Cobb-Douglas utility function is homothetic - the share of total income spent on any of the goods is constant over all possible incomes. ... The following is a slight rewording of an example in Richard Thaler's Misbehaving (1): Positive framing of certain outcome: Imagine you are $300 richer than you are today. You are given a choice between: A. A certain gain of \$100. B. A 50% chance to gain \$200 and a 50% chance of losing \$0. Negative ... Why are averages preferred to extremes on the same indifference curve? That is false. Whoever told you this is mistaken. Or it could also be that you misheard/misread and are confusing this with the idea that the bundle of 100 apples + 100 oranges is (usually) preferred to the bundle of 200 apples or the bundle of 200 oranges. Doesn't everything along ... The MRS represents the rate of exchange between goods x and y that would leave the agent indifferent to trading the two. It is the slope of the indifference curve for a given value of utility U. To derive the indifference map (the set of indifference curves), take U constant in the utility function, and solve for y in terms of x. You can check if your MRS ... I believe this comes from the very beginning of the book, which tries to emphasize the concept of scarcity... a very important idea in economics. Later on, you'll see that there are certain types of goods that, when one person "consumes" it, it doesn't mean that other people cannot consume it as well. In economics, this is called a non-rival good. Some ... From the Oxford dictionary (emphasis added by me): externality (noun): A consequence of an industrial or commercial activity which affects other parties without this being reflected in market prices... Another way to put this is that externalities are not the results of market rivalry. So if you and I are bidding on the same item on e-Bay, and you ... The problem formulation admits the following Normal Form representation.
We can reject any strategy involving a price greater than 2, as demand falls to zero and such strategies are strictly dominated by those for which prices are either 1 or 2.

        0        1            2
0     [0,0]    [0,0]        [0,0]
1     [0,0]    [0.5,0.5]    [1,0]
2     [0,0]    [0,1]        [1,1]

... As mentioned by Mas-Colell, Whinston and Green, the equality is true "for all $p$ and $w$". It is a consequence of the budget constraint, which is satisfied for any prices and income values: $$\Sigma_{k=1}^{L} p_k x_k (p,w) = w.$$ As you mention, the equality $$ \Sigma_{k=1}^{L} \frac{\partial x_l (p,w)}{\partial p_k} p_k + \frac{\partial x_l (p,w)}{\... At the level of individuals (and in the differentiable case), the first order derivatives of the demand system are related to the second order derivatives of the utility function. This implies that the second order derivatives of the demand system are related to the third order derivatives of the utility function. Indeed, from the first order condition ... Let $q$ denote the output of the firm, and let $\varepsilon_p(q)$ denote the elasticity of price w.r.t. quantity sold. We know that when profit is maximized $$|\varepsilon_p(q)| = \frac{p(q)-MC(q)}{p(q)}.$$ We also know that in the long-run equilibrium firms have zero economic profit, i.e. $$AC(q) = p(q).$$ Productive efficiency is achieved when $AC(q)$ ... Common pool resources can be depleted by overuse. For example, an area of communal grazing land would be a common pool resource because grazing too many animals decreases the amount of grass available to each animal. Wikipedia is a common good but is not a common pool resource - reading a lot of articles on Wikipedia does not reduce the number of ... To start with, plot the line for which \begin{equation}x + 3y = U\end{equation} The red line in the graph above is this line. For simplicity, I've assumed U = 9 in this case but the solution would work for any U > 0. The region to the right of this line would satisfy your constraint but since we are interested in minimising our cost, we would want our ... The expenditure minimization problem in the question is as follows: \begin{eqnarray*} \min_{x\geq 0, y\geq 0} & \ \ p_Xx + p_Yy \\ \text{s.t.}& \ \ x + 3y \geq U \end{eqnarray*} where $p_X > 0$, $p_Y > 0$ and $U \geq 0$ are given. Since prices are positive, the cost minimizing choice will satisfy the condition that $x + 3y = U$. So, we can rewrite ...
Let's say I want to use a Gaussian copula $$C_{R_t}(\eta_1, ..., \eta_n) = N_{R_t}(N^{-1}(\eta_1), ...,N^{-1}(\eta_n))$$ with a time-varying correlation matrix $R_t$. Through DCC we model the correlation, which evolves as $$Q_t=(1-\alpha-\beta)\overline{Q}+\beta Q_{t-1} + \alpha \epsilon^{*}_{t-1} \epsilon^{*T}_{t-1}$$ where $\epsilon_t^*=N^{-1}(\eta_t)$. The components $r_{ij,t}$ of $R_t$ are then obtained after normalization: $$r_{ij,t}=\frac{q_{ij, t}}{\sqrt{q_{ii,t}q_{jj,t}}}$$ (see http://www.frbsf.org/economic-research/files/christoffersen.pdf) Say that, instead of using some built-in commands in R (such as cgarchspec and cgarchfit from the rmgarch package), I want to write a function in R to do this. Specifically, I want R to i) compute $Q_t$, which is measurable at time $t-1$ given previous initial values; ii) convert it into $R_t$ as defined above; iii) and use it as input to generate $N$ random draws from the Gaussian copula at time $t$. Next, I would like R to repeat steps i-iii for $t+1$ using values estimated in $t$, in $t+2$ using $t+1$ data, and so on until $t+d$. What would something like this look like? Does anyone have a hint, or any link that would introduce me to such a multi-level equation? I beg you to pardon my R ignorance but I've never come across such an exercise during my studies and now I'm totally lost. Moreover, I'd like to implement something similar using other copula families; that's why I want to get the basics by writing down the equations for the Gaussian one rather than just using the built-in commands. Any hint is appreciated, thank you in advance!
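I can't offer authoritative R code, but here is a hedged Python/numpy sketch of steps (i)-(iii) as I read them, which should translate line by line to R. The DCC parameters are taken as already estimated, and the way one simulated $\epsilon^*_t$ is fed back in to advance the recursion for multi-step-ahead simulation is a simplification and an assumption.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def dcc_gaussian_copula_path(eta, Q_bar, alpha, beta, horizon, n_draws):
    """Rough sketch of steps (i)-(iii): update Q_t, normalize it to R_t, and
    draw from the Gaussian copula with correlation R_t, for `horizon` periods.

    eta   : (T, n) array of past copula observations (uniforms in (0,1))
    Q_bar : (n, n) unconditional target matrix
    alpha, beta : DCC parameters, assumed already estimated elsewhere
    """
    eps = norm.ppf(eta)                   # epsilon*_t = N^{-1}(eta_t)
    n = Q_bar.shape[0]

    # (i) run the recursion over the observed sample to reach the forecast origin
    Q = Q_bar.copy()
    for t in range(eps.shape[0]):
        Q = (1 - alpha - beta) * Q_bar + beta * Q + alpha * np.outer(eps[t], eps[t])

    sims = []
    for _ in range(horizon):
        # (ii) normalize Q_t to a correlation matrix R_t
        d = 1.0 / np.sqrt(np.diag(Q))
        R = Q * np.outer(d, d)
        # (iii) N draws from the Gaussian copula with correlation R_t
        z = rng.multivariate_normal(np.zeros(n), R, size=n_draws)
        sims.append(norm.cdf(z))          # copula draws in [0,1]^n
        # feed one simulated epsilon*_t back in to advance the recursion
        # (a simplification; simulating many paths is the more common choice)
        Q = (1 - alpha - beta) * Q_bar + beta * Q + alpha * np.outer(z[0], z[0])
    return sims
```

Swapping the Gaussian quantile/CDF pair for another family's transforms is the natural extension point for the other copulas mentioned at the end of the question.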
Here is the answer I had made over on MathOverflow: Surprisingly, the answer is yes! Well, let me say that the answer is yes for what I find to be a reasonable way to understand what you've asked. Specifically, what I claim is that if PA is consistent, then there is a consistent theory $T$ in the language of arithmetic with the following properties: The axioms of $T$ are definable in the language of arithmetic. PA proves, of every particular axiom of PA, that it satisfies the defining property of $T$, and so $T$ extends PA. $T$ proves that the set of axioms satisfying that definition forms a consistent theory. In other words, $T$ proves that $T$ is consistent. In this sense, the theory $T$ is a positive instance of what you request. But actually, a bit more is true about the theory $T$ I have in mind, and it may lead you to think a little about what exactly you want. Actually, PA proves that $T$ is consistent. Furthermore, the theory $T$ has exactly the same axioms as PA. I believe that this was observed first by Feferman, and probably someone can provide the best reference for this. The idea of the proof is simple. We shall simply describe the axioms of PA in a different way, rather than enumerating them in the usual way. Specifically, let $T$ consist of the usual axioms of PA, added one at a time, except that we add the next axiom only so long as the resulting theory remains consistent. Since we assumed that PA is consistent, it follows that all the axioms of PA will actually satisfy the defining property of $T$, and so PA will be contained in $T$. Furthermore, since PA proves of any particular finite number of axioms of PA that they are consistent, it follows that PA proves that any particular axiom of PA will be in $T$. Because of how we defined it, however, it is clear that PA and hence also $T$ proves that $T$ is consistent, since if it weren't, there would be a first stage where the inconsistency arises, and then we wouldn't have added the axiom making it inconsistent. Almost by definition, $T$ is consistent, and PA can prove that. So $T$ proves that $T$, as defined by the definition we gave for it, is consistent. So this theory $T$ actually proves its own consistency! Meanwhile, let me point out that if one makes slightly stronger requirements on what is wanted, then the question has a negative answer, essentially by the usual proof of the second incompleteness theorem: Theorem. Suppose that $T$ is an arithmetically definable theory extending PA, such that if $\sigma$ is an axiom of $T$, then $T$ proves that $\sigma$ is an axiom of $T$, and furthermore PA proves these things about $T$. If $T$ is consistent, then it does not prove its own consistency. Proof. By the Gödel fixed-point lemma, let $\psi$ be a sentence for which PA proves $\psi\leftrightarrow\ \not\vdash_T\psi$. Thus, PA proves that $\psi$ asserts its own non-provability in $T$. I claim, first, that $T$ does not prove $\psi$, since if it did, then since $T$ proves that its actual axioms are indeed axioms, it follows that $T$ would prove that that proof is indeed a proof, and so $T$ would prove that $\psi$ is provable in $T$, a statement which PA and hence $T$ proves is equivalent to $\neg\psi$, and so $T$ would also prove $\neg\psi$, contrary to consistency. So $T$ does not prove $\psi$. And this is precisely what $\psi$ asserts, so $\psi$ is true. In the previous paragraph, we argued that if $T$ is consistent, then $\psi$ is true.
By formalizing that argument in arithmetic, and since we assumed that PA proves our hypotheses on $T$, we see that PA proves that $\text{Con}(T)\to\psi$. So if $T$ were to prove $\text{Con}(T)$, then it would prove $\psi$, contradicting our earlier observation. So $T$ does not prove $\text{Con}(T)$. QED
The main aim of this paper is to present a Stochastic Finite Element Method analysis with reference to the principal design parameters of bridges for pedestrians: the eigenfrequency and the deflection of the bridge span. They are considered with respect to the random thickness of plates in the boxed-section bridge platform, the Young modulus of structural steel, and the static load resulting from a crowd of pedestrians. The influence of the quality of the numerical model in the context of traditional FEM is also shown on the example of a simple steel shield. Steel structures with random parameters are discretized in exactly the same way as for the needs of the traditional Finite Element Method. Its probabilistic version is provided thanks to the Response Function Method, where several numerical tests with random parameter values varying around the mean value enable the determination of the structural response and, thanks to the Least Squares Method, its final probabilistic moments.

In this paper a technique has been developed to determine the constant parameters of copper as a power-law hardening material by the tensile test approach. A work-hardening process is used to describe the increase of the stress level necessary to continue plastic deformation. A computer program is used to show the variation of the stress-strain relation for different values of the stress hardening exponent, n, and the power-law hardening constant, α. Due to its close tolerances, excellent corrosion resistance and high material strength, copper (Cu) has been selected as the material in this analysis. As a power-law hardening material, Cu has been used to compute the stress hardening exponent, n, and the power-law hardening constant, α, from tensile test experiments without heat treatment and after heat treatment. A wealth of information about the mechanical behavior of a material can be determined by conducting a simple tensile test in which a cylindrical specimen of uniform cross-section is pulled until it ruptures or fractures into separate pieces. The original cross-sectional area and gauge length are measured prior to conducting the test, and the applied load and gauge deformation are continuously measured throughout the test. Based on the initial geometry of the sample, the engineering stress-strain behavior (stress-strain curve) can be easily generated, from which numerous mechanical properties, such as the yield strength and elastic modulus, can be determined. A universal testing machine is utilized to apply the load in a continuously increasing (ramp) manner according to ASTM specifications. Finally, theoretical results are compared with those obtained from experiments, where the curves are found to be similar in nature. It is observed that there is a significant change in the value of n obtained with and without heat treatment; this means that the value of n should be determined for the heat-treated condition of the copper material for applications in engineering fields.
Salima Sadat, Allel Mokaddem, Bendouma Doumi, Mohamed Berber and Ahmed Boutaous: good mechanical properties at break and must first be plasticized or formulated with different additives. Starchy materials can then be implemented by casting or extrusion [5, 6]. For this reason, we thought of strengthening a starch matrix by two natural fibers – hemp and sisal. Sisal is a perennial plant consisting of a rosette of large leaves with triangular section up to 2 m long. It is a tropical plant and each plant can produce 180 to 240 leaves depending on the geographical situation, altitude, rainfall and variety considered. Sisal can be harvested in 2

Dmitry Popolov, Sergey Shved, Igor Zaselskiy and Igor Pelykh: point for a cantilever beam of rectangular cross-section under the action of a static transverse bending force in accordance with [7] has the form of: $$\left\{\begin{aligned} f_{Cx} &= -P_{x\,\max}\,\frac{L^3}{3EI_y} = -P_{x\,\max}\,\frac{4L^3}{EHh^3} \\ f_{Cy} &= -P_{y\,\max}\,\frac{L^3}{3EI_x} = -P_{y\,\max}\,\frac{4L^3}{EH^3h}, \end{aligned}\right. \tag{2}$$ where E is the Young's modulus for the material of the elastic element

the global behavior of the part may not be easily determined. Different ways of improving the mechanical behavior of engineering parts are used in practice. One of them goes through selecting a material having a higher Young's modulus value. The other is to modify the inertia moment of the considered part [1, 2]. This can be achieved by inserting ribs in the longitudinal and/or transversal directions and/or webbing. If reinforcements are oriented in the normal direction as of the longitudinal (z-axis) direction, it appears to be effective under bending loads, as

Amit K. Thawait, Lakshman Sondhi, Shubhashis Sanyal and Shubhankar Bhowmick: function, exponential function and Mori–Tanaka scheme. These distributions are implemented in the FEM using element-based material grading. A finite element formulation for the problem is reported, which is based on the principle of stationary total potential. Disks are subjected to centrifugal body load and have a clamped-free boundary condition. The work aims to investigate the effect of the grading parameter "n" on the deformation and stresses for different material gradation laws. 2 Problem Formulation: In this section, geometric equations as well as different

drill bit. [Figure 3: von Mises stress at mid-section of drill bit. Figure 4: von Mises stress at entry without drill bit. Figure 5: von Mises stress at completion of drilling.] 2.2 Computational challenges: The simulation that ran on an Intel second-generation mobile processor took approximately 2 days to complete. Subsequent models were created with only a change in drill bit diameter. The simulation as a whole, with the subsequent models, took approximately 8 to 16 days. The stable time increment is

S. Sai Venkatesh, T. A. Ram Kumar, A. P. Blalakumhren, M. Saimurugan and K. Prakash Marimuthu: (y): 0.0125 m; area moment of inertia of section ($I = bh^3/12$): $3.25521 \times 10^{-8}\ \mathrm{m}^4$; section modulus ($Z = I/y$): $2.60417 \times 10^{-6}\ \mathrm{m}^3$; Young's modulus of tool holder ($E$): $2.05 \times 10^{11}\ \mathrm{N/m^2}$; distance from tool tip to center of strain gauge: 0.045 m; work piece diameter: 0.03 m. The cutting force in turning operation is determined using a strain gauge. Here, the strain gauges are connected in half-bridge configuration type II.
This configuration was specifically designed for measuring bending. Specimens of type IV (Figure 2): thickness $T$ = 6 mm, width of narrow section $W_c$ = 6 mm, length of narrow section $L$ = 33 mm, width overall $W_o$ = 19 mm, length overall $L_o$ = 100 mm, gage length $G$ = 25 mm, distance between grips $D$ = 65 mm, outer radius $R_o$ = 25 mm, and radius of fillet $R$ = 14 mm. The mechanical tests were carried out with a Zwick/Roell-type machine with a capacity of 20 kN [9]. Figure 2: Specimens of uniaxial tensile test UT. All dimensions of the specimens are taken according to ASTM standard D638-03 [10]. The
I would like to numerically solve the following system of coupled nonlinear differential equations: $$ -\frac{\hbar^2}{2m_a} \frac{\partial^2}{\partial x^2}\psi_a + V_{ext}\psi_a + \left( g_a |\psi_a|^2 + g_{ab} |\psi_b|^2 \right)\psi_a = \mu_a \psi_a $$ $$ -\frac{\hbar^2}{2m_b} \frac{\partial^2}{\partial x^2}\psi_b + V_{ext}\psi_b + \left( g_b |\psi_b|^2 + g_{ab} |\psi_a|^2 \right)\psi_b=\mu_b\psi_b $$ where $\hbar$, $m_a$, $m_b$, $g_a$, $g_b$, $g_{ab}$ are known coefficients and $V_{ext}$ is a known function of $x$, i.e.: $$ V_{ext}= -P \left[\cos\left(\frac{3}{2}\, \frac{x}{L}\, 2\pi \right)\right]^2 $$ The unknowns are the eigenfunctions $\psi_a(x)$, $\psi_b(x)$ and the eigenvalues $\mu_a$ and $\mu_b$. $V_{ext}$, $\psi_a$ and $\psi_b$ are all defined on the domain $x\in[0,L]$. The functions $\psi_a(x)$ and $\psi_b(x)$ are complex. The boundary conditions are periodic, i.e.: $$ \psi_a(x+L)=\psi_a(x) \qquad \psi_b(x+L)=\psi_b(x) $$ Notice that the period of the eigenfunctions should be $L$ while the period of the external potential is $L/3$. Finally, there is a constraint on the norms of $\psi_a$ and $\psi_b$, namely: $$ \int_0^L |\psi_a|^2 \, \mathrm{d}x= N, \qquad \int_0^L |\psi_b|^2 \, \mathrm{d}x= M $$ Can you please suggest a good strategy to handle this problem and, possibly, some Matlab code?
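I can't provide polished Matlab code, but one standard strategy for this kind of norm-constrained eigenproblem is imaginary-time propagation (normalized gradient flow) with Strang splitting, using FFTs for the kinetic term so the periodic boundary conditions come for free. Below is a minimal Python sketch of that idea, which is easy to port to Matlab; all parameter values, the grid size, the time step, and the iteration count are placeholders, and $\hbar = m_a = m_b = 1$ for readability.

```python
import numpy as np

# Placeholder parameters (hbar = m_a = m_b = 1); replace with the real values.
Lbox, Nx = 10.0, 256
x = np.linspace(0.0, Lbox, Nx, endpoint=False)
dx = Lbox / Nx
k = 2 * np.pi * np.fft.fftfreq(Nx, d=dx)          # angular wavenumbers

P, ga, gb, gab, Na, Mb = 1.0, 1.0, 1.0, 0.5, 1.0, 1.0
Vext = -P * np.cos(1.5 * x / Lbox * 2 * np.pi) ** 2

def normalize(psi, N):
    """Rescale psi so that integral of |psi|^2 dx equals N."""
    return psi * np.sqrt(N / (np.sum(np.abs(psi) ** 2) * dx))

psi_a = normalize(np.ones(Nx, dtype=complex), Na)
psi_b = normalize(np.ones(Nx, dtype=complex), Mb)

dt = 1e-3
for _ in range(20000):
    # half-step with the local (potential + interaction) terms
    Ua = Vext + ga * np.abs(psi_a) ** 2 + gab * np.abs(psi_b) ** 2
    Ub = Vext + gb * np.abs(psi_b) ** 2 + gab * np.abs(psi_a) ** 2
    psi_a *= np.exp(-0.5 * dt * Ua)
    psi_b *= np.exp(-0.5 * dt * Ub)
    # full step with the kinetic term in Fourier space (periodic BCs)
    psi_a = np.fft.ifft(np.exp(-0.5 * dt * k ** 2) * np.fft.fft(psi_a))
    psi_b = np.fft.ifft(np.exp(-0.5 * dt * k ** 2) * np.fft.fft(psi_b))
    # second half-step with the updated local terms
    Ua = Vext + ga * np.abs(psi_a) ** 2 + gab * np.abs(psi_b) ** 2
    Ub = Vext + gb * np.abs(psi_b) ** 2 + gab * np.abs(psi_a) ** 2
    psi_a *= np.exp(-0.5 * dt * Ua)
    psi_b *= np.exp(-0.5 * dt * Ub)
    # re-impose the norm constraints at every step
    psi_a = normalize(psi_a, Na)
    psi_b = normalize(psi_b, Mb)
```

Once converged, $\mu_a$ and $\mu_b$ can be estimated either from the rate at which the norm decays per step before renormalization, or by evaluating $\mu_a = \frac{1}{N}\int \psi_a^* \left[-\tfrac{1}{2}\partial_x^2 + V_{ext} + g_a|\psi_a|^2 + g_{ab}|\psi_b|^2\right]\psi_a \, \mathrm{d}x$ (and analogously for $\mu_b$).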
From Paul Romer's extended mathiness appendix [pdf]. I apparently completely misunderstood Paul Romer's comment about "mathiness". My original interpretation was that Romer was upset about obfuscating political assumptions that aren't substantiated empirically by using fancy math. But then I started reading some of his appendix (pictured above). I was completely wrong. Paul Romer is upset about the technical rigor of those political assumptions. If the function Lucas and Moll (2014) used allowed you to exchange the order of the limits everything apparently would have been fine! Unfortunately, it turns out that Romer's takedown of Lucas and Moll is the true mathiness. Lucas and Moll (LM) put forward some economic growth function $g_{\beta}(t)$ where $\beta$ is the "rate of innovation". LM then tell us: $$ \lim_{t \rightarrow \infty} \; g_{\beta}(t) = \text{2\%} $$ which is independent of $\beta$. Fine. But Romer objects! He shows (somewhere -- couldn't find the comment on LM he was referring to on his website) $$ \lim_{\beta \rightarrow 0} \; g_{\beta}(T) = 0 $$ Oh noes! We have a case where the limit depends on the order you take it: $$ \lim_{\beta \rightarrow 0} \; \lim_{T \rightarrow \infty} \; g_{\beta}(T) \neq \lim_{T \rightarrow \infty} \; \lim_{\beta \rightarrow 0} \; g_{\beta}(T) $$ He even goes on to state this in a mathy "proposition" form (pictured above). Now if $g$ represented the average polarization of an Ising model, this might be interesting (even lending itself to the possibility of a topological solution). But infinity is not readily encountered in a real economy and the situation being described is the idea of observing the economy at a date $T$ when the typical time until "knowledge arrives" (I know!) is $1/\beta$. What do these two limits mean in real life? You are observing an economy in which knowledge never arrives ($\beta \rightarrow 0$ first). You are observing an economy after the knowledge has arrived ($T \rightarrow \infty$ first). The second one is the sensible limit of Lucas and Moll (2014). Now what does the time of observation $T$ mean in real life? I am assuming economists don't really pay attention to the big bang or the eventual heat death of the universe and that an economy can happen at any time in the lifetime of the universe. That is to say -- economics has a time translation invariance where you could relabel the year 1991 as 10191 and it wouldn't make a bit of difference. Relative times matter, but not absolute times. Which means that I could shift $T$ to any finite, but large value arbitrarily ... in particular I can choose $T \gg 1/\beta$. I can't choose $\beta$ as it represents a real thing: the time between two "knowledge" events. That is to say there is an existing scale in the model, $1/\beta$, the time difference between "knowledge" events. There is no such scale for $T$ unless it is $1/\beta$, and therefore the only sensible limit is: $$ T \gg 1/\beta $$ which is situation 2 above. Romer turns out to be ironically wrong because he uses too mathematical a description in a physical theory. You can discard the first order of limits ($\beta \rightarrow 0$ taken before $T \rightarrow \infty$) as a mathematical curiosity that doesn't represent a real life economy.
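A toy numerical check of the order-of-limits point (my own illustration; Lucas and Moll's actual growth function is not reproduced in the post, so the functional form below is purely an assumed stand-in with the same two limits):

```python
import numpy as np

def g(beta, t, g_inf=0.02):
    # Assumed toy stand-in for g_beta(t): tends to 2% as t -> infinity for any
    # fixed beta > 0, and to 0 as beta -> 0 at any fixed t.
    return g_inf * (1.0 - np.exp(-beta * t))

# T -> infinity first (huge T at fixed beta): ~2% regardless of beta
print([g(b, 1e9) for b in (1e-1, 1e-3, 1e-6)])

# beta -> 0 first (tiny beta at fixed T): ~0
print([g(b, 100.0) for b in (1e-3, 1e-6, 1e-9)])

# the "physical" regime T >> 1/beta recovers the 2% answer
print(g(1e-3, 1e5))
```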
I don't have any particular fondness for Lucas, but I think in his zeal to take him down Romer falls for the exact mathiness he purports to dislike! Update 9 April 2017: I later discussed this more generally. Economists do not appear to understand the physical meaning behind limits, using them only as mathematical procedures.
Distances We have the invariant distance equation for a homogeneous and isotropic universe (an FRW spacetime): \[ ds^2 = -c^2 dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} +r^2\left(d\theta^2 + \sin^2(\theta) d\phi^2\right)\right]. \] Here we introduce several distance definitions, and how they are related to the coordinate system that leads to the above invariant distance expression. Luminosity distance: By definition of luminosity distance \(d_L\), \[ F = \frac{L}{4\pi d_L^2} \] which is the relationship we expect in a Euclidean geometry with no expansion, assuming an isotropic emitter. We also calculated the relationship between flux and luminosity in an FRW spacetime and found \[ F = \frac{L}{4\pi r^2 (1+z)^2} \] so we conclude that in an FRW spacetime, \(d_L = r(1+z)\). Due to how apparent magnitude \(m\) and absolute magnitude \(M\) are defined, we have \[ \mu \equiv m-M = 5 \log_{10}\left(\frac{d_L}{10\ {\rm pc}}\right) \] where \(\mu\) is called the distance modulus. If you know the flux of an object (how bright it is) then you can determine its apparent magnitude \(m\). If you know the luminosity of an object (its total power output), then you can determine its absolute magnitude \(M\). Angular diameter distance: By definition of angular-diameter distance, \(d_A\), \[ \ell = \theta d_A \] where \(\theta\) is the angle subtended by an arc of a circle with length \(\ell\), as it would be measured with measuring tape. By the angle subtended, we mean the angle between two light rays, one coming from one end of the arc, and the other from the other end of the arc. If we place ourselves in the center of the coordinate system we can work out what this means in terms of coordinates. Place the observer at the spatial origin \(r=0\) and at time equals today. Place one end of the arc at \(r=d, \theta = 0, \phi = 0 \) and the other at \(r=d, \theta = \alpha ,\ \phi = 0\). Light will travel from both of these points to the origin along purely radial paths; i.e., with no change in \( \theta \ {\rm or}\ \phi\). So the angle they subtend upon arrival is \(\alpha\). We can use the invariant distance expression to work out that \( \ell = a \alpha d\) where \(a\) is the scale factor at the time the light we are receiving today is emitted from the object. Thus \(d_A = \ell/\theta = a d \alpha/\alpha = ad\) where \(d\) is the radial coordinate separation between the object and the observer. Comoving angular diameter distance: This is simply the angular diameter distance divided by the scale factor. We will reserve \(D_A\) for comoving angular diameter distance. The comoving angular diameter distance between \(r=0\) and \(r=d\) is \(D_A = d\). Box \(\PageIndex{1}\) Exercise 16.1.1: In an FRW spacetime, how are \(D_A\) and \(d_L\) related? Answer \(d_L = D_A (1+z)\) Box \(\PageIndex{2}\) Exercise 16.2.1: What is the comoving distance, \(\ell\), from the origin to some point with radial coordinate value \(r\), along a path of constant \(\theta\) and \(\phi\)? Answer The path is at constant \(\theta\) and \(\phi\) so only \(r\) is varying. Therefore \(\sqrt{ds^2} = a(t) dr/\sqrt{1-kr^2}\). The comoving length is given by integrating this up, while setting the scale factor equal to unity so \(\ell = \int_0^r dr/\sqrt{1-kr^2}\). For \(kr^2 << 1\), \(\ell \simeq \int_0^r dr(1+k r^2/2) = r + kr^3/6\). Curvature integrals: Although we've made use of a first order Taylor expansion to analytically solve the above integral, the exact integral does have an analytic solution. For \(k> 0\), \(\ell = (1/\sqrt{k}) \sin^{-1}(\sqrt{k}r)\).
For \(k< 0\), \(\ell = (1/\sqrt{-k}) \sinh^{-1}(\sqrt{-k}r)\). To work out how the comoving angular diameter distance \(D_A\) is related to the scale factor at the time light was emitted, \(a\), we look at how light travels from coordinate value \(r\) to the origin. Light has \(ds^2 = 0\), and from that we get \[\begin{equation} \begin{aligned} \int_0^r \frac{dr}{\sqrt{1-kr^2}} & = \int_t^{t_0} cdt/a = c \int_a^1 da/(a^2 H), \\ \\ (1/\sqrt{-k}) \sinh^{-1}(\sqrt{-k} r) & = c \int_a^1 da/(a^2 H) \\ \\ r & = (1/\sqrt{-k}) \sinh\left(\sqrt{-k} c\int_a^1 da/(a^2 H)\right) \\ \\ D_A & = (1/\sqrt{-k}) \sinh\left(\sqrt{-k} c\int_a^1 da/(a^2 H)\right) \end{aligned} \end{equation}\] where, except for the first line, we have assumed \(k < 0\). I leave it to the student to work out the \(k > 0\) case. The \(k=0\) case should also be clear. Box \(\PageIndex{3}\) Exercise 16.3.1: In calculating \(D_A\) vs. \(a\), what are the two different ways curvature makes a difference? Answer Curvature affects the history of the expansion rate, \(H(a)\), via the Friedmann equation. It also affects the time a photon has to travel to come from coordinate distance \(r\) due to the \(dr^2/(1-kr^2)\) term in the invariant distance equation. Box \(\PageIndex{4}\) We have defined the density parameters \(\Omega_x = \rho_{x,0}/\rho_{c,0}\) where \(\rho_c \equiv 3H^2/(8\pi G)\) is the critical density, defined to be the total density for which the curvature, \(k\) is zero. Exercise 16.4.1: Using the Friedmann equation, convince yourself that if \(\rho = \rho_c\), then \(k=0\). With this notation we can write \[ H^2(a) = H_0^2\left(\Omega_\Lambda + \Omega_m a^{-3} + \Omega_K a^{-2}\right) \] where \(\Omega_K \equiv -k/(H_0^2)\). Answer I have not produced a written solution for this yet. Exercise 16.4.2: Show that this can be derived from the Friedmann equation and the fact that \(\rho_\Lambda \propto a^0\) and \(\rho_m \propto a^{-3}\). Answer I have not produced a written solution for this yet. Exercise 16.4.3: Further, show that \(\Omega_\Lambda + \Omega_m + \Omega_K = 1\). Answer I have not produced a written solution for this yet. Some Definitions of Astronomical Terms Magnitudes are absurd but useful if you want to use data from astronomers. Luminosity: The luminosity of an object, \(L\), is its power output. Usually its total electromagnetic power output, sometimes referred to as bolometric luminosity. Typical units for luminosity are ergs/sec (\(10^7\) erg = 1 Joule, 1 Watt = 1 Joule/sec) or solar luminosity, \(L_{\rm Sun}\). The Sun, by definition has a luminosity of one solar luminosity and \(L_{\rm Sun} = 3.826 \times 10^{33}\) erg/sec = more than \(10^{24}\) 100 Watt light bulbs. (You can remember this if you remember it's about as luminous as 7 Avogadro's number of 100 Watt light bulbs). Flux: The flux, \(F\), from an object is not an intrinsic property of the object, but also depends on the distance to the object. It is the amount of energy passing through a unit area, per unit of time. For an isotropic emitter in a non-expanding, Euclidean three-dimensional space, \(F = L/(4 \pi d^2)\) where \(d\) is the distance between source and observer. This equation just follows from energy conservation; note that the total power flowing through a spherical shell of radius \(d\) completely surrounding the emitter at its center is \(4 \pi d^2 \times F = L\). Luminosity Distance: The universe is expanding, and the spatial geometry is non-Euclidean to some degree. 
We define luminosity distance such that the Euclidean, non-expanding relationship between flux, luminosity, and distance is preserved if one simply swaps out distance for luminosity distance. That is, \[F = \frac{L}{4\pi d_L^2}.\] We showed previously that in an FRW universe \(d_L = d\times(1+z)\) where \(d\) is the coordinate distance to the object from us that we are observing in the current epoch. Spectral flux density: We usually do not measure the total flux from an object, but instead measure the flux in a manner that depends on how the flux is spread out in frequency. Thus a useful concept is the spectral flux density, \(S\), that quantifies how much flux there is per unit frequency. The units of spectral flux density are erg/s/m\(^2\)/Hz, where Hz is the unit of frequency called Hertz, equal to 1/s. Apparent magnitude: Astronomers often use apparent magnitude, \(m\), instead of flux. The apparent magnitude has a logarithmic dependence on flux; the reason for this is historical, and is fundamentally due to the logarithmic sensitivity of our eyes to flux. Not only is it logarithmic instead of linear, but brighter objects have smaller magnitudes. This is because the Greeks defined the brightest stars as stars of the first magnitude, and the next brightest as stars of the 2nd magnitude, down to the stars we could just barely see at all, which are stars of the 6th magnitude. This ancient system, updated with precise definitions related to flux, is still in use today (otherwise I would not bother telling you about it). One way of relating apparent magnitude to flux is the following: \[m = M_{\rm Sun} - 2.5 \log_{10}\left(\frac{F}{F_{\rm Sun 10}}\right)\] where \(M_{\rm Sun} = 4.76\) is the absolute magnitude of the Sun (see next definition) and \(F_{\rm Sun 10}\) is the flux we would get from the Sun if it were 10pc away. Since \(L_{\rm Sun} = 3.826 \times 10^{33}\) erg/sec and 1pc = \(3.0856 \times 10^{18}\) cm we get \(F_{\rm Sun 10} = 3.198 \times 10^{-7}\) erg/cm\(^2\)/sec. Note that because of the -2.5 factor in front of the \(\log_{10}\), if the flux increases by a factor of 10, the apparent magnitude decreases by 2.5. Conversely, if the magnitude increases by 1, the flux decreases by a factor of \(10^{1/2.5} = 10^{0.4}\). Absolute Magnitude: The absolute magnitude of an object, denoted by \(M\), is another way of expressing its luminosity. It is designed so that the apparent magnitude and absolute magnitude of an object are the same for an object at 10pc. Therefore: \[M = M_{\rm Sun} - 2.5\log_{10}\left(\frac{L}{L_{\rm Sun}}\right). \] Distance Modulus: The distance modulus is defined as \(\mu \equiv m - M\). Note that as a difference between apparent and absolute magnitudes, this is equal to a log of the ratio of flux and luminosity. By plugging in the definitions above of \(m\) and \(M\) one finds \[ \mu = 5\log_{10}\left(\frac{d_L}{10 {\rm pc}}\right)= 5\log_{10}\left(\frac{d_L}{1 {\rm pc}}\right) - 5\]
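As a concrete (unofficial) illustration of the machinery above, here is a short Python sketch that evaluates \(c\int da/(a^2 H)\) numerically, applies the \(\sinh\)/\(\sin\) curvature correction for \(D_A\), and converts to a luminosity distance and distance modulus. The parameter values are arbitrary, and \(\Omega_K\) is related to \(k\) with the factors of \(c\) restored.

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary example cosmology; c in km/s, H0 in km/s/Mpc, distances in Mpc.
c = 299792.458
H0, Om, OL = 70.0, 0.3, 0.65
OK = 1.0 - Om - OL                       # Omega_Lambda + Omega_m + Omega_K = 1

def H(a):
    return H0 * np.sqrt(OL + Om * a**-3 + OK * a**-2)

def D_A_comoving(z):
    a_emit = 1.0 / (1.0 + z)
    chi, _ = quad(lambda a: c / (a**2 * H(a)), a_emit, 1.0)   # c * int da/(a^2 H)
    if OK > 0:      # k < 0 (open): the sinh case derived above
        s = np.sqrt(OK) * H0 / c
        return np.sinh(s * chi) / s
    if OK < 0:      # k > 0 (closed): sin instead of sinh
        s = np.sqrt(-OK) * H0 / c
        return np.sin(s * chi) / s
    return chi      # flat case

z = 1.0
DA = D_A_comoving(z)
dL = DA * (1.0 + z)                      # d_L = D_A (1 + z)
mu = 5 * np.log10(dL * 1e6 / 10.0)       # distance modulus, d_L converted to pc
print(DA, dL, mu)
```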
After visiting some of the applications of different aspects of atomic physics, we now return to the basic theory that was built upon Bohr’s atom. Einstein once said it was important to keep asking the questions we eventually teach children not to ask. Why is angular momentum quantized? You already know the answer. Electrons have wave-like properties, as de Broglie later proposed. They can exist only where they interfere constructively, and only certain orbits meet proper conditions, as we shall see in the next module. Following Bohr’s initial work on the hydrogen atom, a decade was to pass before de Broglie proposed that matter has wave properties. The wave-like properties of matter were subsequently confirmed by observations of electron interference when scattered from crystals. Electrons can exist only in locations where they interfere constructively. How does this affect electrons in atomic orbits? When an electron is bound to an atom, its wavelength must fit into a small space, something like a standing wave on a string. (See Figure.) Allowed orbits are those orbits in which an electron constructively interferes with itself. Not all orbits produce constructive interference. Thus only certain orbits are allowed—the orbits are quantized. Figure \(\PageIndex{1}\): (a) Waves on a string have a wavelength related to the length of the string, allowing them to interfere constructively. (b) If we imagine the string bent into a closed circle, we get a rough idea of how electrons in circular orbits can interfere constructively. (c) If the wavelength does not fit into the circumference, the electron interferes destructively; it cannot exist in such an orbit. For a circular orbit, constructive interference occurs when the electron’s wavelength fits neatly into the circumference, so that wave crests always align with crests and wave troughs align with troughs, as shown in Figure (b). More precisely, when an integral multiple of the electron’s wavelength equals the circumference of the orbit, constructive interference is obtained. In equation form, the condition for constructive interference and an allowed electron orbit is \[n \lambda_n = 2 \pi r_n (n = 1, \, 2, \, 3, ...),\] where \(\lambda_n\) is the electron’s wavelength and \(r_n\) is the radius of that circular orbit. The de Broglie wavelength is \(\lambda = h/p = h/mv\), and so here \(\lambda = h/m_e v\). Substituting this into the previous condition for constructive interference produces an interesting result: \[\dfrac{nh}{m_ev} = 2\pi r_n.\] Rearranging terms, and noting that \(L = mvr\) for a circular orbit, we obtain the quantization of angular momentum as the condition for allowed orbits: \[L = m_e vr_n = n\dfrac{h}{2\pi} (n = 1, \, 2, \, 3, ...).\] This is what Bohr was forced to hypothesize as the rule for allowed orbits, as stated earlier. We now realize that it is the condition for constructive interference of an electron in a circular orbit. Figure illustrates this for \(n = 3\) and \(n = 4\). Waves and Quantization The wave nature of matter is responsible for the quantization of energy levels in bound systems. Only those states where matter interferes constructively exist, or are “allowed.” Since there is a lowest orbit where this is possible in an atom, the electron cannot spiral into the nucleus. It cannot exist closer to or inside the nucleus. The wave nature of matter is what prevents matter from collapsing and gives atoms their sizes. 
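As a quick numerical check (not part of the original text), the condition \(n\lambda_n = 2\pi r_n\) can be verified against the standard Bohr-model radii \(r_n = a_0 n^2\); the constants below are the usual SI values, and the check simply recovers the integer \(n\).

```python
import numpy as np

# Standard constants (SI units)
hbar = 1.054571817e-34
h = 2 * np.pi * hbar
m_e = 9.1093837015e-31
a0 = 5.29177210903e-11          # Bohr radius in meters

for n in (1, 2, 3, 4):
    r_n = a0 * n**2              # Bohr-model orbit radius
    v_n = n * hbar / (m_e * r_n) # from L = m_e v r_n = n hbar
    lam = h / (m_e * v_n)        # de Broglie wavelength
    print(n, 2 * np.pi * r_n / lam)   # circumference / wavelength = n
```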
Figure \(\PageIndex{2}\):The third and fourth allowed circular orbits have three and four wavelengths, respectively, in their circumferences. Because of the wave character of matter, the idea of well-defined orbits gives way to a model in which there is a cloud of probability, consistent with Heisenberg’s uncertainty principle. Figure shows how this applies to the ground state of hydrogen. If you try to follow the electron in some well-defined orbit using a probe that has a small enough wavelength to get some details, you will instead knock the electron out of its orbit. Each measurement of the electron’s position will find it to be in a definite location somewhere near the nucleus. Repeated measurements reveal a cloud of probability like that in the figure, with each speck the location determined by a single measurement. There is not a well-defined, circular-orbit type of distribution. Nature again proves to be different on a small scale than on a macroscopic scale. Figure \(\PageIndex{3}\): The ground state of a hydrogen atom has a probability cloud describing the position of its electron. The probability of finding the electron is proportional to the darkness of the cloud. The electron can be closer or farther than the Bohr radius, but it is very unlikely to be a great distance from the nucleus. There are many examples in which the wave nature of matter causes quantization in bound systems such as the atom. Whenever a particle is confined or bound to a small space, its allowed wavelengths are those which fit into that space. For example, the particle in a box model describes a particle free to move in a small space surrounded by impenetrable barriers. This is true in blackbody radiators (atoms and molecules) as well as in atomic and molecular spectra. Various atoms and molecules will have different sets of electron orbits, depending on the size and complexity of the system. When a system is large, such as a grain of sand, the tiny particle waves in it can fit in so many ways that it becomes impossible to see that the allowed states are discrete. Thus the correspondence principle is satisfied. As systems become large, they gradually look less grainy, and quantization becomes less evident. Unbound systems (small or not), such as an electron freed from an atom, do not have quantized energies, since their wavelengths are not constrained to fit in a certain volume. PHET EXPLORATIONS: QUANTUM WAVE INTERFERENCE When do photons, electrons, and atoms behave like particles and when do they behave like waves? Watch waves spread out and interfere as they pass through a double slit, then get detected on a screen as tiny dots. Use quantum detectors to explore how measurements change the waves and the patterns they produce on the screen. Figure \(\PageIndex{4}\): Quantum Wave Interference Summary Quantization of orbital energy is caused by the wave nature of matter. Allowed orbits in atoms occur for constructive interference of electrons in the orbit, requiring an integral number of wavelengths to fit in an orbit’s circumference; that is, \[n\lambda_n = 2\pi r_n (n = 1, \, 2, \, 3, ...),\] where \(\lambda_n\) is the electron’s de Broglie wavelength. Owing to the wave nature of electrons and the Heisenberg uncertainty principle, there are no well-defined orbits; rather, there are clouds of probability. 
Bohr correctly proposed that the energy and radii of the orbits of electrons in atoms are quantized, with energy for transitions between orbits given by \[\Delta E = hf = E_i - E_f,\] where \(\Delta E\) is the change in energy between the initial and final orbits and \(hf\) is the energy of an absorbed or emitted photon. It is useful to plot orbit energies on a vertical graph called an energy-level diagram. The allowed orbits are circular, Bohr proposed, and must have quantized orbital angular momentum given by \[L = m_evr_n = n\dfrac{h}{2\pi} (n = 1, \, 2, \, 3, . . .),\] where \(L\) is the angular momentum, \(r_n\) is the radius of orbit \(n\), and \(h\) is Planck’s constant. Contributors Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
The short answer is: I have no idea. It is fun to think about. Moving along the demand curve is analogous to an isothermal process and there is an additional law for an isoentropic process:$$ P (Q^s)^{1\mp1/\kappa} = \text{constant} $$ One interesting idea is that if $\kappa \sim 1$, (as might be inferred from the price elasticities of supply/demand), then an isoentropic process could obey$$ P (Q^s)^{1 - 1/\kappa} \simeq P (Q^s)^{0} = P = \text{constant} $$ So that whenever you are experiencing sticky prices, maybe the market is undergoing an isoentropic process where the number of microstates that are consistent with the given macrostate is constant. This doesn't mean supply and demand don't change; it just means that there are the same number of microstates that describe the earlier market and the later market. In thermodynamics, an isoentropic process is also reversible. Both signs are allowed in principle and in the thermodynamics case the "plus" gives you an exponent of 5/3. PS In an ideal gas in two dimensions, $\kappa = 2 \cdot 1/2 = 1$ where there are two degrees of freedom and the 1/2 comes from $\langle x^2 \rangle / \langle 1 \rangle$ with a Gaussian distribution. However, again, the plus sign is what gives the result that agrees with experiment.
Take a Klein-Gordon (KG) equation for a model exercise: \begin{equation}\frac{\partial^2 u}{\partial t^2}=c^2\frac{\partial^2 u}{\partial x^2 } - \Omega^2 u,\end{equation} with boundary and initial conditions: \begin{equation}(x,t)\in[0,a]\times[0,\infty)\end{equation} \begin{equation}u(x,0)=\alpha(x)\end{equation} \begin{equation}\frac{\partial u}{\partial t}(x,0)=\beta(x)\end{equation} \begin{equation}u(0,t)=0\end{equation} \begin{equation}u(a,t)=0\end{equation} So nothing special until here. I've learnt two methods for this, one of them is separation of variables, which nicely gives the dispersion of the wave as $\omega^2 = c^2 k^2 + \Omega^2$ and its solution as a Fourier series: \begin{equation}u(x,t)=\sum_n\sin (k_n x)\left[A_n\sin(\omega_n t) + B_n\cos(\omega_n t)\right]\end{equation} The second method starts by supposing the solution can be written in the form $e^{-i\omega t}e^{ikx}$, the minus sign being there because of convention. Substituting this into the KG equation I get $(-i\omega)^2 = c^2(ik)^2 - \Omega^2$ which gives the dispersion of the wave, but then I just don't know how to proceed. If I express $\omega_{\pm}=\pm\sqrt{c^2 k^2 + \Omega^2}$, and write the solution as $u(x,t)=\int dk\, e^{ikx}\left(A(k)e^{i\omega_+t}+B(k)e^{i\omega_-t}\right)$, I reach a dead end, and cannot go on to write the solution as a Fourier series. Furthermore I get (from the initial conditions): \begin{equation}u(x,0)=\int dk\, e^{ikx}(A(k)+B(k)) = \alpha(x) \rightarrow A(k)+ B(k) = \frac{1}{2\pi}\int dx\, \alpha(x)e^{-ikx}\end{equation} \begin{equation}\dot{u}(x,0)=\int dk\, e^{ikx}\omega(A(k)+B(k)) = \beta(x) \rightarrow \omega[A(k)+ B(k)] = \frac{1}{2\pi}\int dx\, \beta(x)e^{-ikx}\end{equation} Thus $\omega = \frac{\int dx\, \beta(x)e^{-ikx}}{\int dx\, \alpha(x)e^{-ikx}}$, which is rather strange. Anyway, I don't know how to proceed from here. If I go the other way around, and express $k_{\pm}=\pm\sqrt{\frac{\omega^2-\Omega^2}{c^2}}$, then this will be $e^{i\omega t}(Ae^{ik_+x}+Be^{ik_-x})$ (I don't know why I don't integrate here, but if I do, nothing will come out of this). Using the boundary conditions, I get $u=\sum_n 2iA_n\sin(k_n x)e^{i\omega t}$ with $k_n=n\pi/a$ as expected, but again I don't know what to do from here, because this gives a completely different Fourier series from the form of the solution (not to mention it is complex). Please point out what I am doing wrong.
These are notes and practice questions on the exponential distribution. The notation is \(X \sim Exp(m)\), where \(m\) is the decay parameter; the mean is \(\mu = 1/m\), and the cumulative distribution function is \(P(X \le x) = 1 - e^{-mx}\) for \(x \ge 0\). Values of an exponential random variable occur in the following way: there are more small values and fewer large ones, and the maximum value of the density on the y-axis is \(m\). Typical examples include the length, in minutes, of long-distance business telephone calls and the time, in months, that a car battery lasts. Memoryless property: for an exponential random variable \(X\), knowledge of what has occurred in the past has no effect on future probabilities. In symbols, \(P(X > x + k \mid X > x) = P(X > k)\) for all \(x \ge 0\) and \(k \ge 0\); in other words, the part stays as good as new until it suddenly breaks. Relation to the Poisson distribution: if there is a known average of \(\lambda\) events occurring per unit time, and the events are independent of one another, then the number of events \(X\) occurring in one unit of time has the Poisson distribution; conversely, if the number of events per unit time follows a Poisson distribution, then the amount of time between events follows the exponential distribution. Minima and maxima: if the cdf of \(X_i\) is denoted by \(F(x)\), then the cdf of the minimum of \(n\) independent, identically distributed random variables is \(1-[1-F(x)]^n\), while the cdf of the maximum of \(n\) independent random variables is the product of the \(n\) individual cdfs. (For instance, if \(X_i \sim \text{Unif}(0,1)\), then trivially \(P(\min(X_1,\dots,X_n) \le 1) = 1\), since \(0 \le X_i \le 1\) for all \(i\).) Practice questions collected here: a computer part has an exponentially distributed lifetime with mean 10 years, so the probability that it lasts more than 7 years is \(e^{-7/10} \approx 0.4966\); eighty percent of such parts last at most how long? (Solve \(0.8 = 1 - e^{-k/10}\) for \(k\), giving \(k \approx 16.1\) years.) Which is larger, the mean or the median? In a small city, the number of car accidents follows a Poisson distribution with an average of three per week; calculate the probability that at most two accidents occur in a week. The time that elapses from one phone call to the next has an exponential distribution with a theoretical mean of four minutes (note that we are concerned only with the rate at which calls come in, and ignore the time spent on the phone). What is the probability that a customer will spend at least an additional five minutes with the postal clerk? (Use the memoryless property.) The number of days ahead that travelers purchase their airline tickets can be modeled by an exponential distribution with mean 15 days. On average, 30 customers per hour arrive at a store and the time between arrivals is exponentially distributed; on average, how many seconds elapse between two consecutive arrivals? Have each class member count the change in his or her pocket or purse, state \(m\), \(\mu\), and \(\sigma\) for the resulting distribution, and shade the area representing the probability that one student has less than \$0.40. The cumulative probability \(P(X \le k)\) can also be computed on a TI-83+/84 calculator using its built-in distribution functions.
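A small simulation sketch (Python; the parameter values and names are my own illustrative choices) can be used to check several of the facts above — the tail probability \(e^{-7/10}\), the memoryless property, and the cdf of the minimum of \(n\) independent exponentials:

```python
import random, math

random.seed(0)
m = 0.1                      # decay parameter, mean = 1/m = 10 years
N = 200_000

samples = [random.expovariate(m) for _ in range(N)]

# P(X > 7) should be exp(-0.7) ~ 0.4966
print(sum(x > 7 for x in samples) / N, math.exp(-m * 7))

# Memoryless property: P(X > 12 | X > 5) should equal P(X > 7)
cond = [x for x in samples if x > 5]
print(sum(x > 12 for x in cond) / len(cond))

# Minimum of n iid exponentials: CDF is 1 - (1 - F(x))^n = 1 - exp(-n m x)
n, x0 = 3, 2.0
mins = [min(random.expovariate(m) for _ in range(n)) for _ in range(50_000)]
print(sum(v <= x0 for v in mins) / 50_000, 1 - math.exp(-n * m * x0))
```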
1.2E: Exercises Exercises For the following exercises, for each pair of points, a. find the slope of the line passing through the points and b. indicate whether the line is increasing, decreasing, horizontal, or vertical. 59) \((-2,4)\) and \((1,1)\) Answer: a. −1 b. Decreasing 60) \((-1,4)\) and \((3,-1)\) 61) \((3,5)\) and \((-1,2)\) Answer: a. 3/4 b. Increasing 62) \((6,4)\) and \((4,-3)\) 63) \((2,3)\) and \((5,7)\) Answer: a. 4/3 b. Increasing 64) \((1,9)\) and \((-8,5)\) 65) \((2,4)\) and \((1,4)\) Answer: a. 0 b. Horizontal 66) \((1,4)\) and \((1,0)\) For the following exercises, write the equation of the line satisfying the given conditions in slope-intercept form. 67) Slope =\(−6\), passes through \((1,3)\) Answer: \(y=−6x+9\) 68) Slope =\(3\), passes through \((-3,2)\) 69) Slope =\(\frac{1}{3}\), passes through \((0,4)\) Answer: \(y=\frac{1}{3}x+4\) 70) Slope =\(\frac{2}{5}\), \(x\)-intercept =\(8\) 71) Passing through \((2,1)\) and \((−2,−1)\) Answer: \(y=\frac{1}{2}x\) 72) Passing through \((−3,7)\) and \((1,2)\) 73) \(x\)-intercept =\(5\) and \(y\)-intercept =\(−3\) Answer: \(y=\frac{3}{5}x−3\) 74) \(x\)-intercept =\(−6\) and \(y\)-intercept =\(9\) For the following exercises, for each linear equation, a. give the slope \(m\) and \(y\)-intercept \(b\), if any, and b. graph the line. 75) \(y=2x−3\) Answer: a. \((m=2,b=−3)\) b. 76) \(y=−\frac{1}{7}x+1\) 77) \(f(x)=-6x\) Answer: a. \((m=−6,b=0)\) b. 78) \(f(x)=−5x+4\) 79) \(4y+24=0\) Answer: a. \((m=0,b=−6)\) b. 80) \(8x-4=0\) 81) \(2x+3y=6\) Answer: a. \((m=−\frac{2}{3},b=2)\) b. 82) \(6x−5y+15=0\) For the following exercises, for each polynomial, a. find the degree; b. find the zeros, if any; c. find the \(y\)-intercept(s), if any; d. use the leading coefficient to determine the graph’s end behavior; and e. determine algebraically whether the polynomial is even, odd, or neither. 83) \(f(x)=2x^2−3x−5\) Answer: \(a. 2\\ b. \frac{5}{2},−1 \\ c. −5 \\ d. Both\; ends \; rise \\ e. Neither \) 84) \(f(x)=−3x^2+6x\) 85) \(f(x)=\frac{1}{2}x^2−1\) Answer: \(a. 2 \\ b. ±\sqrt{2} \\ c. −1 \\ d. Both \; ends\; rise \\ e. Even\) 86) \(f(x)=x^3+3x^2−x−3\) 87) \(f(x)=3x−x^3\) Answer: \(a. 3 \\ b. 0, ±\sqrt{3} \\ c. 0 \\ d. \mbox{ Left end rises, right end falls } \\ e. Odd\) For the following exercises, use the graph of \(f(x)=x^2\) to graph each transformed function \(g\). 88) \(g(x)=x^2−1\) 89) \(g(x)=(x+3)^2+1\) Answer: For the following exercises, use the graph of \(f(x)=\sqrt{x}\) to graph each transformed function \(g\). 90) \(g(x)=\sqrt{x+2}\) 91) \(g(x)=−\sqrt{x}−1\) Answer: For the following exercises, use the graph of \(y=f(x)\) to graph each transformed function \(g\). 92) \(g(x)=f(x)+1\) 93) \(g(x)=f(x−1)+2\) Answer: For the following exercises, for each of the piecewise-defined functions, a. evaluate at the given values of the independent variable and b. sketch the graph. 94) \(f(x)=\begin{cases}4x+3, & x≤0\\ -x+1, & x>0\end{cases} ;f(−3);f(0);f(2)\) 95) \(f(x)=\begin{cases}x^2-3, & x≤0\\ 4x+3, & x>0\end{cases} ;f(−4);f(0);f(2)\) Answer: a. \(13,−3,11\) b. 96) \(h(x)=\begin{cases}x+1, &x≤5\\4, &x>5\end{cases} ;h(0);h(π);h(5)\) 97) \(g(x)=\begin{cases}\frac{3}{x−2}, &x≠2\\4, &x=2\end{cases} ;g(0);g(−4);g(2)\) Answer: a. \(\frac{−3}{2},\frac{−1}{2},4\) b. For the following exercises, determine whether the statement is true or false. Explain why. 98) \(f(x)=(4x+1)/(7x−2)\) is a transcendental function.
99) \(g(x)=\sqrt[3]{x}\) is an odd root function Answer: True; \(n=3\) 100) A logarithmic function is an algebraic function. 101) A function of the form \(f(x)=x^b\), where \(b\) is a real valued constant, is an exponential function. Answer: False; \(f(x)=x^b\), where \(b\) is a real-valued constant, is a power function 102) The domain of an even root function is all real numbers. 103) [T] A company purchases some computer equipment for $20,500. At the end of a 3-year period, the value of the equipment has decreased linearly to $12,300. a. Find a function \(y=V(t)\) that determines the value V of the equipment at the end of t years. b. Find and interpret the meaning of the \(x\)- and \(y\)-intercepts for this situation. c. What is the value of the equipment at the end of 5 years? d. When will the value of the equipment be $3000? Answer: \(a. V(t)=−2733t+20500 \\ b. (0,20,500) \mbox{ means that the initial purchase price of the equipment is } $20,500; (7.5,0) \mbox{ means that in 7.5 years the computer equipment has no value. } \\ c. $6835 \\ d. \mbox{ In approximately } 6.4 \mbox{ years.}\) 104) [T] Total online shopping during the Christmas holidays has increased dramatically during the past 5 years. In 2012 \((t=0)\),total online holiday sales were $42.3 billion, whereas in 2013 they were $48.1 billion. a. Find a linear function S that estimates the total online holiday sales in the year t. b. Interpret the slope of the graph of S. c. Use part a. to predict the year when online shopping during Christmas will reach $60 billion. 105) [T] A family bakery makes cupcakes and sells them at local outdoor festivals. For a music festival, there is a fixed cost of $125 to set up a cupcake stand. The owner estimates that it costs $0.75 to make each cupcake. The owner is interested in determining the total cost \(C\) as a function of number of cupcakes made. a. Find a linear function that relates cost C to x, the number of cupcakes made. b. Find the cost to bake 160 cupcakes. c. If the owner sells the cupcakes for $1.50 apiece, how many cupcakes does she need to sell to start making profit? (Hint: Use the INTERSECTION function on a calculator to find this number.) Answer: \( a. C=0.75x+125 \\ b. $245 \\ c. 167 cupcakes\) 106) [T] A house purchased for $250,000 is expected to be worth twice its purchase price in 18 years. a. Find a linear function that models the price P of the house versus the number of years t since the original purchase. b. Interpret the slope of the graph of P. c. Find the price of the house 15 years from when it was originally purchased. 107) [T] A car was purchased for $26,000. The value of the car depreciates by $1500 per year. a. Find a linear function that models the value V of the car after t years. b. Find and interpret \(V(4)\). Answer: a. \(V(t)=−1500t+26,000\) b. In 4 years, the value of the car is $20,000. 108) [T] A condominium in an upscale part of the city was purchased for $432,000. In 35 years it is worth $60,500. Find the rate of depreciation. 109) [T] The total cost C (in thousands of dollars) to produce a certain item is modeled by the function \(C(x)=10.50x+28,500\), where x is the number of items produced. Determine the cost to produce 175 items. Answer: $30,337.50 110) [T] A professor asks her class to report the amount of time t they spent writing two assignments. Most students report that it takes them about 45 minutes to type a four-page assignment and about 1.5 hours to type a nine-page assignment. a. 
Find the linear function \(y=N(t)\) that models this situation, where \(N\) is the number of pages typed and t is the time in minutes. b. Use part a. to determine how many pages can be typed in 2 hours. c. Use part a. to determine how long it takes to type a 20-page assignment. 111) [T] The output (as a percent of total capacity) of nuclear power plants in the United States can be modeled by the function \(P(t)=1.8576t+68.052\), where t is time in years and \(t=0\) corresponds to the beginning of 2000. Use the model to predict the percentage output in 2015. Answer: 96% of the total capacity 112) [T] The admissions office at a public university estimates that 65% of the students offered admission to the class of 2019 will actually enroll. a. Find the linear function \(y=N(x)\), where \(N\) is the number of students that actually enroll and \(x\) is the number of all students offered admission to the class of 2019. b. If the university wants the 2019 freshman class size to be 1350, determine how many students should be admitted.
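Since several of the applied exercises above (103–112) ask for a linear model through two data points, here is a small illustrative Python helper (the function names are mine); the numbers reproduce exercise 103:

```python
def line_through(p1, p2):
    """Return slope m and intercept b of the line through two points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return m, b

# Exercise 103: equipment worth $20,500 at t = 0 and $12,300 at t = 3
m, b = line_through((0, 20500), (3, 12300))
V = lambda t: m * t + b
print(round(m, 2), b)    # slope ~ -2733.33, intercept 20500
print(round(V(5)))       # ~ 6833; the stated answer $6,835 uses the rounded slope -2733
print(round(-b / m, 1))  # ~ 7.5 years until the value reaches $0
```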
Newton-Mercator Series Theorem Let $\ln x$ be the natural logarithm function. Then: \(\displaystyle \ln \left({1 + x}\right)\) \(=\) \(\displaystyle x - \dfrac {x^2} 2 + \dfrac {x^3} 3 - \dfrac {x^4} 4 + \cdots\) \(\displaystyle \) \(=\) \(\displaystyle \sum_{n \mathop = 1}^\infty \frac {\left({-1}\right)^{n + 1} } n x^n\) valid for $-1 < x \le 1$. This is known as the Newton-Mercator series. Proof From Sum of Infinite Geometric Progression, we know that: $\displaystyle \sum_{n \mathop = 0}^\infty x^n$ converges to $\dfrac 1 {1 - x}$ for $\left\vert{x}\right\vert < 1$ which implies that: $\displaystyle \sum_{n \mathop = 0}^\infty (-1)^n x^n$ converges to $\dfrac 1 {1 + x}$ We also know from Definition:Natural Logarithm that: $\ln(x+1)=\displaystyle \int_0^x \frac {\mathrm dt} {1+t}$ Combining these facts, we get: $\ln(x+1)=\displaystyle \int_0^x \displaystyle \sum_{n \mathop = 0}^\infty (-1)^n t^n \, \mathrm dt$ Since the geometric series converges uniformly on the interval of integration for $\left\vert{x}\right\vert < 1$, we may integrate term by term; from Linear Combination of Integrals, we can rearrange this to: $\displaystyle \sum_{n \mathop = 0}^\infty (-1)^n \displaystyle \int_0^x t^n \, \mathrm dt$ Then, using Integral of Power: $\displaystyle \sum_{n \mathop = 0}^\infty \dfrac {(-1)^n} {n+1} x^{n+1} $ We can shift the index $n+1$ to $n$: $\displaystyle \sum_{n \mathop = 1}^\infty \dfrac {(-1)^{n-1}} {n} x^{n} $ This is equivalent to: $ \displaystyle \sum_{n \mathop = 1}^\infty \dfrac {(-1)^{n+1}} {n} x^{n} $ Finally, we check the endpoints $x=1$ and $x=-1$. For $x=-1$, we get: $\displaystyle \sum_{n \mathop = 1}^\infty \dfrac {(-1)^{n+1}} {n} (-1)^n$ $(-1)^{n+1}$ and $(-1)^n$ always have opposite signs, so their product is $-1$. This means we get: $-\displaystyle \sum_{n \mathop = 1}^\infty \dfrac 1 n$ This is the harmonic series, which we know to be divergent. We then check $x=1$. We get: $\displaystyle \sum_{n \mathop = 1}^\infty \dfrac {(-1)^{n+1}} {n} $ This is the alternating harmonic series, which we know to be convergent; by Abel's Theorem its sum equals $\displaystyle \lim_{x \mathop \to 1^-} \ln(1+x) = \ln 2$. Therefore, we can conclude that: $\ln(x+1)=\displaystyle \sum_{n \mathop = 1}^\infty \dfrac {(-1)^{n+1}} {n} x^{n}$ for $-1 < x \le 1$. $\blacksquare$ In particular, setting $x = 1$ gives: $\ln 2 = 1 - \dfrac 1 2 + \dfrac 1 3 - \dfrac 1 4 + \dfrac 1 5 - \cdots$ Also known as The Newton-Mercator series is also known as the Mercator series. Source of Name This entry is named for Isaac Newton and Nicholas Mercator. However, the series was also independently discovered by Grégoire de Saint-Vincent.
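As a quick numerical illustration (not part of the proof), the partial sums can be compared against \(\ln(1+x)\) in Python:

```python
import math

def mercator_partial_sum(x, n_terms):
    """Partial sum of sum_{n>=1} (-1)^(n+1) x^n / n."""
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, n_terms + 1))

# Convergence is fast for |x| < 1 and slow (but valid) at x = 1
for x in (0.5, -0.5, 1.0):
    print(x, mercator_partial_sum(x, 10_000), math.log(1 + x))
```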
An RLC circuit is a simple electric circuit with a resistor, inductor and capacitor in it -- with resistance R, inductance L and capacitance C, respectively. It's one of the simplest circuits that displays non-trivial behavior. You can derive an equation for the behavior by using Kirchhoff's laws (conservation of the stocks and flows of electrons) and the properties of the circuit elements. Wikipedia does a fine job. You arrive at a solution for the current as a function of time that looks generically like this (not the most general solution, but a solution): $$ i(t) = A e^{\left( -\alpha + \sqrt{\alpha^{2} - \omega^{2}} \right) t} $$ with $\alpha = R/2L$ and $\omega = 1/\sqrt{L C}$. If you fill in some numbers for these parameters, you can get all kinds of behavior: As you can tell from that diagram, the Kirchhoff conservation laws don't in any way nail down the behavior of the circuit. The values you choose for R, L and C do. You could have a slowly decaying current or a quickly oscillating one. It depends on R, L and C. Now you may wonder why I am talking about this on an economics blog. Well, Cullen Roche implicitly asked a question: Although [stock flow consistent models are] widely used in the Fed and on Wall Street it hasn’t made much impact on more mainstream academic economic modeling techniques for reasons I don’t fully know. The reason is that the content of stock flow consistent modeling is identical to Kirchhoff's laws. Currents are flows of electrons (flows of money); voltages are stocks of electrons (stocks of money). Kirchhoff's laws do not in any way nail down the behavior of an RLC circuit. SFC models do not nail down the behavior of the economy. If you asked what the impact of some policy was and I gave you the graph above, you'd probably never ask again. What SFC models do in order to hide the fact that anything could result from an SFC model is effectively assume R = L = C = 1, which gives you this: I'm sure to get objections to this. There might even be legitimate objections. But I ask of any would-be objector: How is accounting for money different from accounting for electrons? Before saying this circuit model is in continuous time, note that there are circuits with clock cycles -- in particular the device you are currently reading this post with. I can't for the life of me think of any objection, and I showed exactly this problem with an SFC model from Godley and Lavoie. But to answer Cullen's implicit question -- as the two Mathematica notebooks above show, SFC models don't specify the behavior of an economy without assuming R = L = C = 1 ... that is to say Γ = 1. Update: Nick Rowe is generally better than me at these things.
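For readers who want to reproduce the qualitative point, here is a minimal Python sketch that just evaluates the quoted solution for a couple of parameter choices (the post's own plots were presumably made elsewhere; this is only an illustration):

```python
import numpy as np

def rlc_current(t, R, L, C, A=1.0):
    """Evaluate i(t) = A * exp((-alpha + sqrt(alpha^2 - omega^2)) t).

    alpha = R / (2L), omega = 1 / sqrt(L C); the square root is taken as a
    complex number so both the overdamped and the oscillatory cases work.
    """
    alpha = R / (2 * L)
    omega = 1 / np.sqrt(L * C)
    s = -alpha + np.sqrt(complex(alpha**2 - omega**2))
    return A * np.exp(s * t)

t = np.linspace(0, 10, 5)
print(rlc_current(t, R=1.0, L=1.0, C=1.0))   # damped oscillation (complex exponent)
print(rlc_current(t, R=5.0, L=1.0, C=1.0))   # slow decay, no oscillation
```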
Longitude lines are perpendicular and latitude lines are parallel to the equator. A geographic coordinate system is a coordinate system that enables every location on the Earth to be specified by a set of numbers or letters. The coordinates are often chosen such that one of the numbers represents vertical position, and two or three of the numbers represent horizontal position. A common choice of coordinates is latitude, longitude and elevation. [1] Contents: Geographic latitude and longitude; UTM and UPS systems; Stereographic coordinate system; Geodetic height; Cartesian coordinates; Shape of the Earth; Expressing latitude and longitude as linear units; Datums often encountered; Geostationary coordinates; On other celestial bodies; Notes; References. Geographic latitude and longitude The "latitude" (abbreviation: Lat., φ, or phi) of a point on the Earth's surface is the angle between the equatorial plane and the straight line that passes through that point and is normal to the surface of a reference ellipsoid which approximates the shape of the Earth. [n 1] This line passes a few kilometers away from the center of the Earth except at the poles and the equator where it passes through Earth's center. [n 2] Lines joining points of the same latitude trace circles on the surface of the Earth called parallels, as they are parallel to the equator and to each other. The north pole is 90° N; the south pole is 90° S. The 0° parallel of latitude is designated the equator, the fundamental plane of all geographic coordinate systems. The equator divides the globe into Northern and Southern Hemispheres. The "longitude" (abbreviation: Long., λ, or lambda) of a point on the Earth's surface is the angle east or west from a reference meridian to another meridian that passes through that point. All meridians are halves of great ellipses (often improperly called great circles), which converge at the north and south poles. A line, which was intended to pass through the Royal Observatory, Greenwich (a suburb of London, UK), was chosen as the international zero-longitude reference line, the Prime Meridian. Places to the east are in the eastern hemisphere, and places to the west are in the western hemisphere. The antipodal meridian of Greenwich is both 180°W and 180°E. The zero/zero point is located in the Gulf of Guinea about 625 km south of Tema, Ghana. In 1884 the United States hosted the International Meridian Conference, at which the meridian through Greenwich was adopted as the international prime meridian. References ^ a b c d e f A Guide to coordinate systems in Great Britain v1.7 October 2007 D00659 accessed 14.4.2008 ^ Greenwich 2000 Limited (9 June 2011). "The International Meridian Conference". Wwp.millennium-dome.com. Retrieved 31 October 2012. ^ DMA Technical Report Geodesy for the Layman, The Defense Mapping Agency, 1983 ^ "Making maps compatible with GPS". Government of Ireland 1999. Archived from the original on 21 July 2011. Retrieved 15 April 2008. Notes ^ The surface of the Earth is closer to an ellipsoid than to a sphere, as its equatorial diameter is larger than its north-south axis.
^ The greatest distance between an ellipsoid normal and the center of the Earth is 21.9 km at a latitude of 45°, using Earth radius#Radius at a given geodetic latitude and Latitude#Numerical comparison of auxiliary latitudes: (6367.5 km)×tan(11.67')=21.9 km. ^ The French Institut Géographique National (IGN) maps still use longitude from a meridian passing through Paris, along with longitude from Greenwich. ^ WGS 84 is the default datum used in most GPS equipment, but other datums can be selected. On other celestial bodies Similar coordinate systems are defined for other celestial bodies such as: Geostationary coordinates Geostationary satellites (e.g., television satellites) are over the equator at a specific point on Earth, so their position related to Earth is expressed in longitude degrees only. Their latitude is always zero, that is, over the equator. Datums often encountered In popular GIS software, data projected in latitude/longitude is often represented as a 'Geographic Coordinate System'. For example, data in latitude/longitude if the datum is the North American Datum of 1983 is denoted by 'GCS North American 1983'. Latitude and longitude values can be based on different geodetic systems or datums; a coordinate in one datum can sometimes be roughly changed into another datum using a simple translation. For example, to convert from ETRF89 (GPS) to the Irish Grid add 49 metres to the east, and subtract 23.4 metres from the north. [4] More generally one datum is changed into any other datum using a process called Helmert transformations. This involves converting the spherical coordinates into Cartesian coordinates and applying a seven parameter transformation (translation, three-dimensional rotation), and converting back. [1] To estimate the length of a longitudinal degree at latitude \(\phi\) we can assume a spherical Earth (to get the width per minute and second, divide by 60 and 3600, respectively): \(\frac{\pi}{180}M_r\cos \phi\) where Earth's average meridional radius \(M_r\) is 6,367,449 m. Since the Earth is not spherical that result can be off by several tenths of a percent; a better approximation of a longitudinal degree at latitude \(\phi\) is \(\frac{\pi}{180}a \cos \beta\) where Earth's equatorial radius a equals 6,378,137 m and \(\tan \beta = \frac{b}{a}\tan\phi\); for the GRS80 and WGS84 spheroids, b/a calculates to be 0.99664719. (\(\beta\) is known as the reduced (or parametric) latitude.) Aside from rounding, this is the exact distance along a parallel of latitude; getting the distance along the shortest route will be more work, but those two distances are always within 0.6 meter of each other if the two points are one degree of longitude apart.
Longitudinal length equivalents at selected latitudes:
Latitude        City               Degree     Minute     Second    ±0.0001°
60°             Saint Petersburg   55.80 km   0.930 km   15.50 m   5.58 m
51° 28' 38" N   Greenwich          69.47 km   1.158 km   19.30 m   6.95 m
45°             Bordeaux           78.85 km   1.31 km    21.90 m   7.89 m
30°             New Orleans        96.49 km   1.61 km    26.80 m   9.65 m
0°              Quito              111.3 km   1.855 km   30.92 m   11.13 m
(Those coefficients can be improved, but as they stand the distance they give is correct within a centimeter.)
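The two formulas above are easy to check against the table; here is a small Python sketch using the WGS84 values quoted in the text:

```python
import math

A = 6378137.0            # equatorial radius a, in metres
B_OVER_A = 0.99664719    # b/a for the GRS80/WGS84 spheroids

def longitude_degree_length(lat_deg):
    """Length of one degree of longitude at geodetic latitude lat_deg,
    using (pi/180) * a * cos(beta) with tan(beta) = (b/a) tan(phi)."""
    phi = math.radians(lat_deg)
    beta = math.atan(B_OVER_A * math.tan(phi))
    return math.pi / 180.0 * A * math.cos(beta)

# Compare with the table: Greenwich (51 deg 28' 38" N) ~ 69.47 km, Quito (0 deg) ~ 111.3 km
print(longitude_degree_length(51 + 28 / 60 + 38 / 3600) / 1000)   # ~ 69.47
print(longitude_degree_length(0.0) / 1000)                        # ~ 111.32
```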
111132.954 - 559.822\, \cos 2\varphi + 1.175\, \cos 4\varphi On the WGS84 spheroid, the length in meters of a degree of latitude at latitude φ (that is, the distance along a north-south line from latitude (φ - 0.5) degrees to (φ + 0.5) degrees) is about On the GRS80 or WGS84 spheroid at sea level at the equator, one latitudinal second measures 30.715 metres, one latitudinal minute is 1843 metres and one latitudinal degree is 110.6 kilometres. The circles of longitude, meridians, meet at the geographical poles, with the west-east width of a second naturally decreasing as latitude increases. On the equator at sea level, one longitudinal second measures 30.92 metres, a longitudinal minute is 1855 metres and a longitudinal degree is 111.3 kilometres. At 30° a longitudinal second is 26.76 metres, at Greenwich (51° 28' 38" N) 19.22 metres, and at 60° it is 15.42 metres. Expressing latitude and longitude as linear units The Earth is not static as points move relative to each other due to continental plate motion, subsidence, and diurnal movement caused by the Moon and the tides. The daily movement can be as much as a metre. Continental movement can be up to 10 cm a year, or 10 m in a century. A weather system high-pressure area can cause a sinking of 5 mm. Scandinavia is rising by 1 cm a year as a result of the melting of the ice sheets of the last ice age, but neighbouring Scotland is rising by only 0.2 cm. These changes are insignificant if a local datum is used, but are statistically significant if the global GPS datum is used. [1] Though early navigators thought of the sea as a flat surface that could be used as a vertical datum, this is not actually the case. The Earth has a series of layers of equal potential energy within its gravitational field. Height is a measurement at right angles to this surface, roughly toward the centre of the Earth, but local variations make the equipotential layers irregular (though roughly ellipsoidal). The choice of which layer to use for defining height is arbitrary. The reference height that has been chosen is the one closest to the average height of the world's oceans. This is called the geoid. [1] [3] The Earth is not a sphere, but an irregular shape approximating a biaxial ellipsoid. It is nearly spherical, but has an equatorial bulge making the radius at the equator about 0.3% larger than the radius measured through the poles. The shorter axis approximately coincides with axis of rotation. Map-makers choose the true ellipsoid that best fits their need for the area they are mapping. They then choose the most appropriate mapping of the spherical coordinate system onto that ellipsoid. In the United Kingdom there are three common latitude, longitude, height systems in use. The system used by GPS, WGS84, differs at Greenwich from the one used on published maps OSGB36 by approximately 112m. The military system ED50, used by NATO, differs by about 120m to 180m. [1] Shape of the Earth An example is the NGS data for a brass disk near Donner Summit, in California. Given the dimensions of the ellipsoid, the conversion from lat/lon/height-above-ellipsoid coordinates to X-Y-Z is straightforward—calculate the X-Y-Z for the given lat-lon on the surface of the ellipsoid and add the X-Y-Z vector that is perpendicular to the ellipsoid there and has length equal to the point's height above the ellipsoid. The reverse conversion is harder: given X-Y-Z we can immediately get longitude, but no closed formula for latitude and height exists. See "Geodetic system." 
Using Bowring's formula in 1976 Survey Review the first iteration gives latitude correct within 10^{-11} degree as long as the point is within 10000 meters above or 5000 meters below the ellipsoid. Z-axis along the axis of the ellipsoid, positive northward X- and Y-axis in the plane of the equator, X-axis positive toward 0 degrees longitude and Y-axis positive toward 90 degrees east longitude. With the origin at the center of the ellipsoid, the conventional setup is the expected right-hand: Every point that is expressed in ellipsoidal coordinates can be expressed as an x y z (Cartesian) coordinate. Cartesian coordinates simplify many mathematical calculations. The origin is usually the center of mass of the earth, a point close to the Earth's center of figure. Cartesian coordinates To completely specify a location of a topographical feature on, in, or above the Earth, one has to also specify the vertical distance from the centre of the Earth, or from the surface of the Earth. Because of the ambiguity of "surface" and "vertical", it is more commonly expressed relative to a precisely defined vertical datum which holds fixed some known point. Each country has defined its own datum. For example, in the United Kingdom the reference point is Newlyn, while in Canada, Mexico and the United States, the point is near Rimouski, Quebec, Canada. The distance to Earth's centre can be used both for very deep positions and for positions in space. [1] Geodetic height Although no longer used in navigation, the stereographic coordinate system is still used in modern times to describe crystallographic orientations in the fields of crystallography, mineralogy and materials science. During medieval times, the stereographic coordinate system was used for navigation purposes. The stereographic coordinate system was superseded by the latitude-longitude system. Stereographic coordinate system The Universal Transverse Mercator (UTM) and Universal Polar Stereographic (UPS) coordinate systems both use a metric-based cartesian grid laid out on a conformally projected surface to locate positions on the surface of the Earth. The UTM system is not a single map projection but a series of map projections, one for each of sixty 6-degree bands of longitude. The UPS system is used for the polar regions, which are not covered by the UTM system. UTM and UPS systems The combination of these two components specifies the position of any location on the planet, but does not consider altitude nor depth. This latitude/longitude "webbing" is known as the graticule. A graticule representing latitude and longitude of the Earth does not constitute a hierarchy of geographical areas. This is to say, it is not an arrangement of related information or data. [n 3]
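The geodetic-to-Cartesian conversion described above (compute the surface point for the given latitude and longitude, then move along the ellipsoid normal by the height) can be sketched in a few lines of Python; the WGS84 constants here are assumed, since the text does not quote them:

```python
import math

A = 6378137.0                 # WGS84 semi-major axis, m (assumed)
F = 1 / 298.257223563         # WGS84 flattening (assumed)
E2 = F * (2 - F)              # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic latitude/longitude/height to Cartesian X, Y, Z (metres)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius of curvature
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(0.0, 0.0, 0.0))    # (6378137.0, 0.0, 0.0): equator at 0 deg longitude
print(geodetic_to_ecef(90.0, 0.0, 0.0))   # close to (0, 0, 6356752.3): the pole
```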
One of the things I first noticed when I started studying economics was the strange fact that the macroeconomic AD-AS model has some formal similarities to a simple microeconomic supply and demand model. In the AD-AS model, the sum of hundreds of markets for hundreds of goods behaves a bit like a single market for a single good with inflation representing the changes in price. The IS-LM model is another diagrammatic model that is formally similar supply and demand. If you think about it, there is no real reason this has to be true. For example, DSGE models represent macro models that are not formally similar to supply and demand diagrams. Indeed, it is essentially the fallacy of composition to assume the whole works the same way as the sum of its parts. But the other interesting thing is that complex systems frequently exhibit this kind of self-similarity (just Google "self-similarity in nature" for a taste). I was thinking about that while I was playing around with the partition function/ensemble equations for the information equilibrium model on a flight home from DC the other day. My original intent was to try to answer the question of which distribution of states preserves the scale invariance of the individual markets (so as to maybe suggest which distribution we should think about with regard to statistical equilibrium), but I ended up deriving a fascinating result. Most of the preceding was just to set the mood. Let's dive in to a short derivation. I will start from here (which also goes into the justification of the starting point). Consider an ensemble of markets (information equilibrium relationship) $A_{i}$ with a single factor of production $B$ and IT indices $k_{i}$ so that \text{(1) }\; \frac{dA_{i}}{dB} = k_{i} \; \frac{A_{i}}{B} $$ The solution for an individual market is $$ A_{i} = A^{(i)}_{0} \left( \frac{B}{B_{0}} \right)^{k_{i}} $$ Now we use the partition function $Z(B)$ to aggregate the markets. The partition function itself is: Z(B) \equiv \sum_{i} e^{-k_{i} \log B/B_{0}} $$ Again, this is justified in the previous link. However the main point is that we assume the economy has a measurable and meaningful growth rate. If economic growth is considered to be a reasonable thing to talk about in equilibrium, then this particular approach codifies that fact in terms of a unique maximum entropy distribution. Now let's look at some pieces we'll need. First there's the ensemble average value of $A$: \begin{align} \langle A \rangle & = \frac{1}{Z} \sum_{i} A^{(i)}_{0} \left( \frac{B}{B_{0}} \right)^{k_{i}} e^{-k_{i} \log B/B_{0}}\\ & = \frac{1}{Z} \sum_{i} A^{(i)}_{0}\\ & \equiv \frac{\alpha}{Z} \end{align} $$ where $\alpha$ is defined as the sum of the coefficients $A^{(i)}_{0}$. The average IT index is \begin{align} \langle k \rangle & = \frac{1}{Z} \sum_{i} k_{i} e^{-k_{i} \log B/B_{0}}\\ & = -\frac{1}{Z} \sum_{i} \frac{d}{d \log B/B_{0}} e^{-k_{i} \log B/B_{0}}\\ & = - \frac{B}{Z} \frac{dZ}{dB} \end{align} $$ Where I used the fact that $d \log B/B_{0} = dB/B$. Finally, the derivative of the average value of $A$ with respect to $B$ is: \begin{align} \frac{d \langle A \rangle}{dB} & = \frac{d}{dB} \frac{\alpha}{Z}\\ & = -\frac{\alpha}{Z^{2}}\frac{dZ}{dB} \end{align} $$ Magically, $$ \langle k \rangle \frac{\langle A \rangle}{B} = -\frac{\alpha}{Z^{2}}\frac{dZ}{dB} $$ So we find: $$ \text{(2) }\; \frac{d \langle A \rangle}{dB} = \langle k \rangle \frac{\langle A \rangle}{B} $$ That is to say the ensemble average (2) is formally similar to the individual markets (1). 
Equation (2) is much more complex (the IT index depends on $B$), but over short periods of time where $B$ doesn't change quickly the ensemble equation can be treated as a single information equilibrium relationship. Since you can derive a supply and demand diagram from the information equilibrium relationship (under certain assumptions), this could be seen as a short run AD-AS model. But more generally, we have a "self-similarity" in the behavior of individual micro markets and a macro ensemble of markets. PS This basically solves something I was looking into almost exactly three years ago in these three posts: here, here and here. PPS I'm sorry about the slowdown in posting of late. I've become much busier with work (and I'm working on a really fun R&D effort).
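A quick numerical sanity check of equation (2) is straightforward; the sketch below (Python, with arbitrarily chosen $A_0^{(i)}$ and $k_i$) compares a finite-difference derivative of $\langle A \rangle$ with $\langle k \rangle \langle A \rangle / B$:

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.uniform(0.5, 2.0, size=1000)      # IT indices of the individual markets
A0 = rng.uniform(0.5, 1.5, size=1000)     # coefficients A_0^(i); alpha = A0.sum()
B0 = 1.0

def Z(B):
    return np.exp(-k * np.log(B / B0)).sum()

def A_avg(B):
    return A0.sum() / Z(B)

def k_avg(B):
    w = np.exp(-k * np.log(B / B0))
    return (k * w).sum() / w.sum()

# Check d<A>/dB == <k> <A> / B at some B, via a central finite difference
B, h = 2.0, 1e-6
lhs = (A_avg(B + h) - A_avg(B - h)) / (2 * h)
rhs = k_avg(B) * A_avg(B) / B
print(lhs, rhs)   # the two numbers agree to high precision
```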
This question already has an answer here:

for (i=1; i<=n; i=i*2){
    for (j=1; j<=i; j++){
        basic_step;
    }
}

Regarding the above nested loops, I can't seem to understand why the following runtime analysis is correct (specifically the first equality), when $k$ is defined as $\log i$, meaning $i=2^k$. What is the process one should follow in order to decide on this setting and the sigmas' indices? While I do understand that the outer loop runs $\Theta(\log n)$ times, I still can't grasp the setting of $k$. A clarification will be very much appreciated! $$\sum_{k=0}^{\log n}\sum_{j=1}^{2^k}1$$
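One way to convince yourself that the double sum counts exactly the executions of basic_step is to tabulate both; a small Python check (illustrative only):

```python
import math

def count_steps(n):
    """Directly count how many times basic_step runs in the nested loops."""
    count, i = 0, 1
    while i <= n:
        for j in range(1, i + 1):
            count += 1
        i *= 2
    return count

def double_sum(n):
    # sum_{k=0}^{floor(log2 n)} sum_{j=1}^{2^k} 1  =  sum_k 2^k
    return sum(2 ** k for k in range(int(math.log2(n)) + 1))

for n in (1, 7, 8, 100, 1024):
    print(n, count_steps(n), double_sum(n))   # the two counts match
```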
Definition:Differentiable Functional Definition Let $y, h \in S: \R \to \R$ be real functions. Let $J \sqbrk y$, $\phi \sqbrk {y; h}$ be functionals, and let $\Delta J \sqbrk {y; h} = J \sqbrk {y + h} - J \sqbrk y$ denote the increment of $J$. Suppose the increment can be written as: $\Delta J \sqbrk {y; h} = \phi \sqbrk {y; h} + \epsilon \size h$ where $\phi \sqbrk {y; h}$ is linear with respect to $h$ and: $\displaystyle \lim_{\size h \mathop \to 0} \epsilon = 0$ Then the functional $J \sqbrk y$ is said to be differentiable. Notes Pragmatically speaking, as $\size h$ approaches $0$, $\epsilon \size h$ can be replaced with a sum of terms that are proportional to $\size h^{1 + \delta}$ for $\delta \in \R$ and $\delta > 0$, such that: $\displaystyle \lim_{\size h \mathop \to 0} \frac {\size h^{1 + \delta} } {\size h} = \lim_{\size h \mathop \to 0} \size h^\delta = 0$ while at the same time $\phi \sqbrk {y; h}$ is of order $\size h$, so that the analogous ratio $\phi \sqbrk {y; h} / \size h$ remains bounded.
I'm a little rusty with my integrals, how may I evaluate the following: $$ \int_0^1{\frac{y}{\sqrt{y(1-y)}}dy} $$ I've tried: $$ \int_0^1{\frac{y}{\sqrt{y(1-y)}}dy} = \int_0^1{\sqrt{\frac{y}{1-y}} dy} $$ Make the substitution $z = 1-y$: $$ = \int_0^1{\sqrt{\frac{1-z}{z}} dz} = \int_0^1{\sqrt{\frac{1}{z} - 1} dz} $$ which seems simpler, but I have not managed to evaluate it. All help appreciated (just a few pointers would be appreciated just as much as the full solution). Thank you EDIT: Solution following Ng Hong Wai's suggestion: At $$ \int_0^1{\sqrt{\frac{y}{1-y}} dy} $$ we make the substitution $y = \sin^2(\theta)$, so $dy = 2\sin(\theta)\cos(\theta)d\theta$. This results in: $$ \int_0^\frac{\pi}{2}{2 \sin^2(\theta)d\theta} = \int_0^\frac{\pi}{2}{[1-\cos(2\theta)]d\theta} = \left[\theta-\frac{1}{2}\sin(2\theta)\right]_0^\frac{\pi}{2} = \frac{\pi}{2} $$ Solution according to M. Strochyk's suggestion: $$ \int_0^1{\sqrt{\frac{y}{1-y}} dy} = 2 \int_0^\infty{\frac{u^2}{(1+u^2)^2}du} $$ By decomposing into partial fractions this becomes: $$ 2 \int_0^\infty{\frac{1}{1+u^2}du} - 2 \int_0^\infty{\frac{1}{(1+u^2)^2}du} $$ The first term integrates directly to $\arctan(u)$, while the second can be evaluated by making the substitution $u=\tan(x)$ and then applying the double-angle formula for cosine. The result is (unsurprisingly): $$ 2\left[\arctan(u)\right]_0^\infty -\left[x + \frac{1}{2}\sin(2x)\right]_0^\frac{\pi}{2} = \frac{\pi}{2} $$
I have a simple question. Actually, it is not really a question; it is a request. This month I have been studying how to understand and implement the Kalman filter algorithm for simple models such as the local level: $ Y_t = \mu_t + \epsilon_t$ $\mu_t = \mu_{t-1} + \gamma_t$ I managed to implement this model (I don't have much ability in programming). Now I'm studying the Extended Kalman Filter, and I'm having trouble implementing the model below using the EKF: Power Growth Model: $ Y_t = \mu_t + \epsilon_t$ $\mu_t = \mu_{t-1}^{(v_{t-1}+1)} + \gamma_t$ $v_{t} = \phi v_{t-1} + \eta_t$ It was then that I thought there should be a scalar model example to estimate using the EKF, with only $\mu_{t}$ and some nonlinear form. I have not found scalar models used as examples for the EKF. Is there any alternative to the local level model for which I could use the EKF for estimation? Any suggestion? Thanks, Laorie.
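As a hedged illustration of how the scalar EKF recursions look for a generic nonlinear state equation \(\mu_t = f(\mu_{t-1}) + \gamma_t\) with the linear observation \(Y_t = \mu_t + \epsilon_t\), here is a minimal Python sketch; the particular \(f\) used at the bottom is an arbitrary illustrative choice, not the power growth model above:

```python
import numpy as np

def ekf_scalar(y, f, f_prime, q, r, mu0=0.0, p0=1.0):
    """Scalar EKF for mu_t = f(mu_{t-1}) + gamma_t, y_t = mu_t + eps_t,
    with Var(gamma_t) = q and Var(eps_t) = r. Returns the filtered means."""
    mu, p = mu0, p0
    out = []
    for yt in y:
        # Predict: linearise f around the current estimate
        F = f_prime(mu)
        mu_pred = f(mu)
        p_pred = F * p * F + q
        # Update with the linear observation y_t = mu_t + eps_t
        k_gain = p_pred / (p_pred + r)
        mu = mu_pred + k_gain * (yt - mu_pred)
        p = (1 - k_gain) * p_pred
        out.append(mu)
    return np.array(out)

# Toy example with a mildly nonlinear state equation (illustrative only)
rng = np.random.default_rng(1)
true_mu, ys = 1.0, []
for _ in range(100):
    true_mu = true_mu + 0.1 * np.sin(true_mu) + rng.normal(0, 0.05)
    ys.append(true_mu + rng.normal(0, 0.2))

est = ekf_scalar(ys, f=lambda m: m + 0.1 * np.sin(m),
                 f_prime=lambda m: 1 + 0.1 * np.cos(m), q=0.05**2, r=0.2**2)
print(est[-5:])
```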
$w$-Matlis Cotorsion Modules and $w$-Matlis Domains Bull. Korean Math. Soc. Published online August 6, 2019 Yongyan Pu, Gaohua Tang, and Fanggui Wang (Panzhihua University, Qinzhou University, Sichuan Normal University) Abstract : Let $R$ be a domain with its field $Q$ of quotients. An $R$-module $M$ is said to be weak $w$-projective if $\Ext^1_R(M,N)=0$ for all $N\in \mathcal{P}^{\dag}_w$, where $\mathcal{P}^{\dag}_w$ denotes the class of $\GV$-torsionfree $R$-modules $N$ with the property that $\Ext^k_R(M,N)=0$ for all $w$-projective $R$-modules $M$ and for all integers $k\geq 1$. In this paper, we define a domain $R$ to be $w$-Matlis if the weak $w$-projective dimension of the $R$-module $Q$ is $\leq 1$. To characterize $w$-Matlis domains, we introduce the concept of $w$-Matlis cotorsion modules and study some basic properties of $w$-Matlis modules. Using these concepts, we show that $R$ is a $w$-Matlis domain if and only if $\Ext^k_R(Q,D)=0$ for any $\mathcal{P}^{\dag}_w$-divisible $R$-module $D$ and any integer $k\geq1$, if and only if every $\mathcal{P}^{\dag}_w$-divisible module is $w$-Matlis cotorsion, if and only if w.$w$-$\pd_RQ/R\leq1$.
Question: A bucket contains $2$ white and $8$ red marbles. A marble is drawn randomly $10$ times in succession with replacement. Find the probability of drawing more than $7$ red marbles. I think since the marbles are replaced, the probability of selecting a red marble does not change from trial to trial. Am I right on this assumption? Also I think if I calculate the probability of selecting $0,1$ or $2$ white marbles, I can get an answer, but I do not know how to approach this. Need help with this. Yes, you are right in that assumption. That means the count of red marbles drawn among the ten has a Binomial Distribution. You should know the probability mass formula for $X\sim\mathcal{Bin}(n,p)$: $$~\Pr(X=k) ~=~ \dbinom {n}k p^k(1-p)^{n-k}\qquad\Big[k\in\{0,..,n\}\Big]$$ In this case, $n=10$, $p=8/10$. The probability of drawing more than seven red marbles is: $$\Pr(X>7) ~=~ \Pr(X=8)+\Pr(X=9)+\Pr(X=10)$$ You are correct that the probabilities do not change from trial to trial. There are three scenarios: 0, 1 or 2 white balls. Picking $0$ white balls happens with probability $\left(\frac{8}{10}\right)^{10}$. Picking $1$ white ball happens with probability $ 10 \times \frac{2}{10} \left(\frac{8}{10}\right)^{9}$, since the trial on which the white ball is picked can be chosen in $10$ ways. Picking $2$ white balls happens with probability $ \binom{10}2 \times \left(\frac{2}{10}\right)^2 \left(\frac{8}{10}\right)^{8}$, since the trials on which the white balls are picked can now be chosen in $\binom{10}{2}$ ways. Hence, the answer is the sum of these, i.e. $\binom{10}2 \times \left(\frac{2}{10}\right)^2 \left(\frac{8}{10}\right)^{8} + 10 \times \frac{2}{10} \left(\frac{8}{10}\right)^{9} + \left(\frac{8}{10}\right)^{10}$.
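Numerically, both answers above give the same value, which a few lines of Python confirm:

```python
from math import comb

n, p = 10, 0.8   # ten draws, P(red) = 8/10 on every draw (with replacement)

def binom_pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

answer = sum(binom_pmf(k) for k in (8, 9, 10))
print(answer)   # ~ 0.678
```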
A density matrix, or density operator, is used in quantum theory to describe the statistical state of a quantum system. The formalism was introduced by John von Neumann (according to other sources independently by Lev Landau and Felix Bloch) in 1927. It is the quantum-mechanical analogue to a phase-space density (probability distribution of position and momentum) in classical statistical mechanics. The need for a statistical description via density matrices arises because it is not possible to describe a quantum mechanical system that undergoes general quantum operations such as measurement, using exclusively states represented by ket vectors. In general a system is said to be in a mixed state, except in the case the state is not reducible to a convex combination of other statistical states. In that case it is said to be in a pure state. Typical situations in which a density matrix is needed include: a quantum system in thermal equilibrium (at finite temperatures), nonequilibrium time-evolution that starts out of a mixed equilibrium state, and entanglement between two subsystems, where each individual system must be described by a density matrix even though the complete system may be in a pure state. See quantum statistical mechanics. The density matrix (commonly designated by ρ) is an operator acting on the Hilbert space of the system in question. For the special case of a pure state, it is given by the projection operator of this state. For a mixed state, where the system is in the quantum-mechanical state $|\psi_j \rang$ with probability pj, the density matrix is the sum of the projectors, weighted with the appropriate probabilities (see bra-ket notation): $$\rho = \sum_j p_j |\psi_j \rang \lang \psi_j|$$ The density matrix is used to calculate the expectation value of any operator A of the system, averaged over the different states $|\psi_j \rang$. This is done by taking the trace of the product of ρ and A: $$\operatorname{tr}[\rho A]=\sum_j p_j \lang \psi_j|A|\psi_j \rang$$ The probabilities pj are nonnegative and normalized (i.e. their sum gives one). For the density matrix, this means that ρ is a positive semidefinite hermitian operator (its eigenvalues are nonnegative) and the trace of ρ (the sum of its eigenvalues) is equal to one. C*-algebraic formulation of density states It is now generally accepted that the description of quantum mechanics in which all self-adjoint operators represent observables is untenable. For this reason, observables are identified to elements of an abstract C*-algebra A (that is one without a distinguished representation as an algebra of operators) and states are positive linear functionals on A. In this formalism, pure states are extreme points of the set of states. Note that using the GNS construction, we can recover Hilbert spaces which realize A as an algebra of operators.
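As a small hedged illustration of the formulas above (the states, weights, and observable below are arbitrary choices made for the example), the density matrix of a qubit mixture and a trace-based expectation value can be computed with NumPy:

```python
import numpy as np

# Two (non-orthogonal) pure states of a qubit and their classical weights
psi1 = np.array([1.0, 0.0])                   # |0>
psi2 = np.array([1.0, 1.0]) / np.sqrt(2)      # |+>
p1, p2 = 0.25, 0.75

# rho = sum_j p_j |psi_j><psi_j|
rho = p1 * np.outer(psi1, psi1.conj()) + p2 * np.outer(psi2, psi2.conj())

# Basic properties: Hermitian, unit trace, nonnegative eigenvalues
print(np.allclose(rho, rho.conj().T), np.trace(rho), np.linalg.eigvalsh(rho))

# Expectation value of an observable A via tr(rho A)
A = np.array([[1.0, 0.0], [0.0, -1.0]])       # Pauli Z
print(np.trace(rho @ A))                       # equals sum_j p_j <psi_j|A|psi_j> = 0.25
```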
I have a data set $x_{1}, x_{2}, \ldots, x_{k}$ and want to find the parameter $m$ such that it minimizes the sum $$\sum_{i=1}^{k}\big|m-x_i\big|,$$ that is $$\min_{m}\sum_{i=1}^{k}\big|m-x_i\big|.$$ Probably you are asking for a proof that the median solves the problem? Well, this can be done like this: The objective is piecewise linear and hence differentiable except for the points $m=x_i$. What is the slope of the objective at some point $m\neq x_i$? Well, the slope is the sum of the slopes of the mappings $m\mapsto |m-x_j|$, and each of these is either $+1$ (for $m>x_j$) or $-1$ (for $m<x_j$). Hence, the slope equals the number of $x_i$'s smaller than $m$ minus the number of $x_i$'s larger than $m$. You see that the slope is zero if there are equally many $x_i$'s smaller and larger than $m$ (for an even number of $x_i$'s). If there is an odd number of $x_i$'s then the slope is $-1$ left of the "middlest" one and $+1$ right of it, hence the middlest one is the minimum. A generalization of this problem to multiple dimensions is called the geometric median problem. As David points out, the median is the solution for the 1-D case; there, you could use median-finding selection algorithms, which are more efficient than sorting. Sorts are $O(n\log n)$ whereas selection algorithms are $O(n)$; sorts are only more efficient if multiple selections are needed, in which case you could sort (expensively) once, and then repeatedly select from the sorted list. The link to the geometric median problem mentions solutions for multidimensional cases. The explicit solution in terms of the median is correct, but in response to a comment by mayenew, here's another approach. It is well-known that $\ell^1$ minimization problems generally, and the posted problem in particular, can be solved by linear programming. The following LP formulation will do for the given exercise, with unknowns $z_i,m$: $$ \min \sum z_i $$ such that: $$ z_i \ge m - x_i $$ $$ z_i \ge x_i - m $$ Clearly $z_i$ must equal $|x_i - m|$ at the minimum, so this asks for the sum of absolute values of the errors to be minimized. The over-powered convex analysis way to show this is just to take subgradients. In fact this is equivalent to the reasoning used in some of the other answers involving slopes. The optimization problem is convex (because the objective is convex and there are no constraints). Also, the subgradient of $\left|m-x_i\right|$ is $-1$ if $m<x_i$, $[-1,1]$ if $m=x_i$, and $+1$ if $m>x_i$. Since a convex function is minimized if and only if its subgradient contains zero, and the subgradient of a sum of convex functions is the (set) sum of the subgradients, you get that 0 is in the subgradient if and only if $m$ is the median of $x_1,\ldots, x_k$. We're basically after: $$ \arg \min_{m} \sum_{i = 1}^{N} \left| m - {x}_{i} \right| $$ One should notice that $ \frac{\mathrm{d} \left | x \right | }{\mathrm{d} x} = \operatorname{sign} \left( x \right) $ (being more rigorous, we would say it is a subgradient of the non-smooth $ {L}_{1} $ norm function). Hence, differentiating the sum above yields $ \sum_{i = 1}^{N} \operatorname{sign} \left( m - {x}_{i} \right) $. This equals zero only when the number of positive items equals the number of negative ones, which happens when $ m = \operatorname{median} \left\{ {x}_{1}, {x}_{2}, \cdots, {x}_{N} \right\} $. One should notice that the median of a discrete group is not uniquely defined.
Moreover, it is not necessarily an item within the group.
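A quick numerical cross-check of both the median characterization and the LP reformulation above (a sketch with synthetic data; the stacked variable layout $v=(z_1,\dots,z_n,m)$ is my own choice):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = rng.normal(size=11)          # toy data, odd length so the median is unique
n = x.size

# LP from the answer above: minimize sum(z_i) s.t. z_i >= m - x_i and z_i >= x_i - m.
# Variables are v = [z_1, ..., z_n, m]; both constraints are written as A_ub @ v <= b_ub.
c = np.r_[np.ones(n), 0.0]
A = np.zeros((2 * n, n + 1))
b = np.zeros(2 * n)
A[:n, :n] = -np.eye(n); A[:n, -1] = 1.0;  b[:n] = x       #  m - z_i <= x_i
A[n:, :n] = -np.eye(n); A[n:, -1] = -1.0; b[n:] = -x      # -m - z_i <= -x_i
bounds = [(0, None)] * n + [(None, None)]

res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
print("LP optimum m:", res.x[-1])
print("median      :", np.median(x))
print("objectives  :", res.fun, np.sum(np.abs(np.median(x) - x)))
```

With an even number of points any $m$ between the two middle order statistics is optimal, which is exactly the non-uniqueness noted in the last two sentences.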
My knowledge of finite difference is very basic so this could be very trivial. I've seen how multidimensional finite difference works for say fluid equations, but they are also dealing with a single variable. How would one solve a coupled multidimensional equation, for example of the form $$a \nabla a = \nabla b,$$ in two dimensions? Meaning we have the system of equations $$a \, \partial_x a = \partial_x b,$$ $$a \partial_y a = \partial_y b.$$ Using a simplistic finite difference method I can solve each equation independently $$a(x+\Delta x, y) = \frac{b(x+\Delta x,y) - b(x,y)}{a(x,y)} + a(x,y),$$ and $$a(x, y+\Delta y) = \frac{b(x, y+\Delta y) - b(x,y)}{a(x,y)} + a(x,y).$$ However if I want to get to $a(x+\Delta x, y+\Delta y)$, there are two options either first moving along $x$ and then $y$ or vice versa. In this case I am not guaranteed to get the same result, meaning the answer is path dependent which is not desirable. What is the common practice to deal with this? Edit: I'll compute the values over a square grid with initial conditions $a(0,0) = a_0$ and $b(x,y)=x-y$. My purpose is to solve this equation so that I may initialize the grid in $a$. This will be implemented in a program I am writing. The equation given is just a no-frills re-expression of what I'm attempting to do. My true equation is solving an isentropic hydrostatic atmosphere with a complicated potential, $\nabla \Phi$, for the density structure, $\rho$. The equation reduces to $$\gamma \, K \, \rho^{\gamma-2} \, \vec{\nabla} \rho = - \vec{\nabla} \Phi$$, where $K$ and $\gamma$ are some constants.
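For concreteness, here is a minimal sketch of the marching scheme described above on a square grid with $b(x,y)=x-y$ (the grid spacing and the value $a_0=2$ are assumptions of mine, with $a_0$ chosen so that $a$ stays away from zero); sweeping in the other order gives a slightly different field, which is exactly the path dependence in question:

```python
import numpy as np

nx, ny = 50, 50
dx = dy = 0.02
x = np.arange(nx) * dx
y = np.arange(ny) * dy
X, Y = np.meshgrid(x, y, indexing="ij")

b = X - Y            # the initial condition b(x, y) = x - y from the question
a = np.zeros((nx, ny))
a[0, 0] = 2.0        # assumed a_0

# sweep along x on the row y = 0:  a(x+dx, 0) = (b(x+dx,0) - b(x,0)) / a(x,0) + a(x,0)
for i in range(nx - 1):
    a[i + 1, 0] = (b[i + 1, 0] - b[i, 0]) / a[i, 0] + a[i, 0]

# then sweep along y in every column
for i in range(nx):
    for j in range(ny - 1):
        a[i, j + 1] = (b[i, j + 1] - b[i, j]) / a[i, j] + a[i, j]

# For this b, the exact relation a^2/2 = b + const is path independent, so the
# x-then-y and y-then-x sweeps differ only by truncation error; for a b that is
# not consistent with a gradient the two orders genuinely disagree.
print(a[-1, -1], np.sqrt(a[0, 0] ** 2 + 2 * (b[-1, -1] - b[0, 0])))
```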
Here is the problem I'm working on (Satan's apple with bounded memory decision problem): Suppose I offer you slices of an apple, first $1/2$, then $1/4$, ... then $1/2^k$, etc. You want to eat as much of the apple as possible without eating the whole thing. For simplicity, let's take the utility $U(A_n) = n$ where $A_n$ is eating $n$ slices, and $U(A_\infty) = -1$. Now suppose you can't remember the decision you made $k$ decisions back. What is the optimal strategy (only deterministic strategies are allowed)? For $k = 0$, either you always pick a slice, or you don't pick one, so there are two strategies. The first gives $-1$ utility, the second gives $0$, so don't take. For $k > 0$, I formalize the problem as such. Let $S:\mathbb N \to \mathbf 2$ be your sequence of moves, with $S(n) = 0$ if you don't pick the $n$th slice and $1$ if you do, and let $S_i:\mathbf k \to \mathbf 2$ be the $k$-length substring starting at $i$. For simplicity (and I don't think it turns out to matter), let $S(i) = 0$ for $i < k$, and $S(k) = 1$. This represents $k$ being the first time you take a slice and removes the difficulty of dealing with a third option of not having been offered a slice. The goal now becomes maximizing $U(S) = \sum_i S(i)$ while still having the sum converge. For $k=1$, we have $1\bar 0$, with $U(S) = 1$ being the optimal solution. For $k = 2$: $S_1 = 01$, $S_2 = 11$, $S_3 = 10$, $S_4 = 00$, with $S = 011\bar 0$, $U(S) = 2$. Notice how all strings of length 2 show up. For $k = 3$: $S_1 = 001$, $S_2 = 011$, $S_3 = 111$, $S_4 = 110$, $S_5 = 101$, $S_6 = 010$, $S_7 = 100$, $S_8 = 000$, with $S = 0011101\bar 0$, $U(S) = 4$. Again, every string of length 3 shows up. For $k = 4$: $S = 00011101100101\bar 0$, $U(S) = 8$, and all strings of length 4 show up. If a string of length $k$ comes up more than once, other than $00\dots$, then we will get a cycle of strings containing a $1$ and $U(S)$ will not converge, so I know the above solutions are optimal. Questions: Do these sequences have a name? Do they exist for all $k$? Will they always contain $2^{k-1}$ $1$s? How do I specify them or give some closed form for them? Is my conjecture true that it doesn't matter if we start with $000\dots 1$ instead of $---\dots 1$ (where $-$ means no slice is offered, or a blank tape entry)? This would mean that you can distinguish the first $k$ offerings from the rest by remembering you've only been offered a slice $i < k$ times. It's strictly more information, but I couldn't do anything with it for $k = 2, 3$.
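A small brute-force check of the window argument for the $k=3$ candidate above (my own sketch; the padding with the all-zero tail is just enough to expose all the windows). The fact that every length-$k$ string appears once is reminiscent of binary de Bruijn sequences, which might be a useful search term:

```python
def windows(seq, k):
    return [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]

k = 3
candidate = [0, 0, 1, 1, 1, 0, 1] + [0] * k     # 0011101 followed by the all-zero tail

ws = windows(candidate, k)
nonzero = [w for w in ws if any(w)]             # windows other than 00...0

print("slices taken U(S):", sum(candidate))                       # 4 = 2^(k-1)
print("windows with a 1 are distinct:", len(nonzero) == len(set(nonzero)))
print("number of distinct windows   :", len(set(ws)))             # 2^k = 8
```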
Let $x_{1}=0,x_{2}=1$ and for $n\geq3,$ define $x_{n}=\frac{x_{n-1}+x_{n-2}}{2}.$ Which of the following is/are true? $1.\ \{x_{n}\}$ is a monotone sequence. $2.\ \lim_{n\to\infty} x_{n}=\frac{1}{2}.$ $3.\ \{x_{n}\}$ is a Cauchy sequence. $4.\ \lim_{n\to\infty} x_{n}=\frac{2}{3}.$ From the first three terms it is clear that the sequence is not monotone, and the limit cannot be $\frac{1}{2}$ if it is convergent. How to prove that the sequence is convergent and that its limit is $\frac{2}{3}$? Thanks a lot.
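One standard route (a sketch of my own, not part of the original post) is to look at consecutive differences, which form a geometric sequence:
$$x_{n+2}-x_{n+1} = \frac{x_{n+1}+x_n}{2}-x_{n+1} = -\frac12\,(x_{n+1}-x_n), \qquad\text{so}\qquad x_{n+1}-x_n = \left(-\frac12\right)^{n-1}(x_2-x_1) = \left(-\frac12\right)^{n-1}.$$
Telescoping,
$$x_n = x_1+\sum_{j=1}^{n-1}\left(-\frac12\right)^{j-1} \;\longrightarrow\; \frac{1}{1+\frac12} = \frac23 .$$
Since the differences decay geometrically, the partial sums are Cauchy (hence convergent), which settles options 3 and 4.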
That was an excellent post and qualifies as a treasure to be found on this site! wtf wrote: When infinities arise in physics equations, it doesn't mean there's a physical infinity. It means that our physics has broken down. Our equations don't apply. I totally get that . In fact even our friend Max gets that.http://blogs.discovermagazine.com/crux/ ... g-physics/ Thanks for the link and I would have showcased it all on its own had I seen it first The point I am making is something different. I am pointing out that: All of our modern theories of physics rely ultimately on highly abstract infinitary mathematics That doesn't mean that they necessarily do; only that so far, that's how the history has worked out. I see what you mean, but as Max pointed out when describing air as seeming continuous while actually being discrete, it's easier to model a continuum than a bazillion molecules, each with functional probabilistic movements of their own. Essentially, it's taking an average and it turns out that it's pretty accurate. But what I was saying previously is that we work with the presumed ramifications of infinity, "as if" this or that were infinite, without actually ever using infinity itself. For instance, y = 1/x as x approaches infinity, then y approaches 0, but we don't actually USE infinity in any calculations, but we extrapolate. There is at the moment no credible alternative. There are attempts to build physics on constructive foundations (there are infinite objects but they can be constructed by algorithms). But not finitary principles, because to do physics you need the real numbers; and to construct the real numbers we need infinite sets. Hilbert pointed out there is a difference between boundless and infinite. For instance space is boundless as far as we can tell, but it isn't infinite in size and never will be until eternity arrives. Why can't we use the boundless assumption instead of full-blown infinity? 1) The rigorization of Newton's calculus culminated with infinitary set theory. Newton discovered his theory of gravity using calculus, which he invented for that purpose. I didn't know he developed calculus specifically to investigate gravity. Cool! It does make sense now that you mention it. However, it's well-known that Newton's formulation of calculus made no logical sense at all. If \(\Delta y\) and \(\Delta x\) are nonzero, then \(\frac{\Delta y}{\Delta x}\) isn't the derivative. And if they're both zero, then the expression makes no mathematical sense! But if we pretend that it does, then we can write down a simple law that explains apples falling to earth and the planets endlessly falling around the sun. I'm going to need some help with this one. If dx = 0, then it contains no information about the change in x, so how can anything result from it? I've always taken dx to mean a differential that is smaller than can be discerned, but still able to convey information. It seems to me that calculus couldn't work if it were based on division by zero, and that if it works, it must not be. What is it I am failing to see? I mean, it's not an issue of 0/0 making no mathematical sense, it's a philosophical issue of the nonexistence of significance because there is nothing in zero to be significant. 2) Einstein's gneral relativity uses Riemann's differential geometry. In the 1840's Bernhard Riemann developed a general theory of surfaces that could be Euclidean or very far from Euclidean. As long as they were "locally" Euclidean. 
Like spheres, and torii, and far weirder non-visualizable shapes. Riemann showed how to do calculus on those surfaces. 60 years later, Einstein had these crazy ideas about the nature of the universe, and the mathematician Minkowski saw that Einstein's ideas made the most mathematical sense in Riemann's framework. This is all abstract infinitary mathematics. Isn't this the same problem as previous? dx=0? 3) Fourier series link the physics of heat to the physics of the Internet; via infinite trigonometric series. In 1807 Joseph Fourier analyzed the mathematics of the distribution of heat through an iron bar. He discovered that any continuous function can be expressed as an infinite trigonometric series, which looks like this: $$f(x) = \sum_{n=0}^\infty a_n \cos(nx) + \sum_{n=1}^\infty b_n \sin(nx)$$ I only posted that because if you managed to survive high school trigonometry, it's not that hard to unpack. You're composing any motion into a sum of periodic sine and cosine waves, one wave for each whole number frequency. And this is an infinite series of real numbers, which we cannot make sense of without using infinitary math. I can't make sense of it WITH infinitary math lol! What's the cosine of infinity? What's the infnite-th 'a'? 4) Quantum theory is functional analysis . If you took linear algebra, then functional analysis can be thought of as infinite-dimensional linear algebra combined with calculus. Functional analysis studies spaces whose points are actually functions; so you can apply geometric ideas like length and angle to wild collections of functions. In that sense functional analysis actually generalizes Fourier series. Quantum mechanics is expressed in the mathematical framework of functional analysis. QM takes place in an infinite-dimensional Hilbert space. To explain Hilbert space requires a deep dive into modern infinitary math. In particular, Hilbert space is complete , meaning that it has no holes in it. It's like the real numbers and not like the rational numbers. QM rests on the mathematics of uncountable sets, in an essential way. Well, thanks to Hilbert, I've already conceded that the boundless is not the same as the infinite and if it were true that QM required infinity, then no machine nor human mind could model it. It simply must be true that open-ended finites are actually employed and underpin QM rather than true infinite spaces. Like Max said, "Not only do we lack evidence for the infinite but we don’t need the infinite to do physics. Our best computer simulations, accurately describing everything from the formation of galaxies to tomorrow’s weather to the masses of elementary particles, use only finite computer resources by treating everything as finite. So if we can do without infinity to figure out what happens next, surely nature can, too—in a way that’s more deep and elegant than the hacks we use for our computer simulations." We can *claim* physics is based on infinity, but I think it's more accurate to say *pretend* or *fool ourselves* into thinking such. Max continued with, "Our challenge as physicists is to discover this elegant way and the infinity-free equations describing it—the true laws of physics. To start this search in earnest, we need to question infinity. I’m betting that we also need to let go of it." He said, "let go of it" like we're clinging to it for some reason external to what is true. I think the reason is to be rid of god, but that's my personal opinion. 
Because if we can't have infinite time, then there must be a creator and yada yada. So if we cling to infinity, then we don't need the creator. Hence why Craig quotes Hilbert because his first order of business is to dispel infinity and substitute god. I applaud your effort, I really do, and I've learned a lot of history because of it, but I still cannot concede that infinity underpins anything and I'd be lying if I said I could see it. I'm not being stubborn and feel like I'm walking on eggshells being as amicable and conciliatory as possible in trying not to offend and I'm certainly ready to say "Ooooohhh... I see now", but I just don't see it. ps -- There's our buddy Hilbert again. He did many great things. William Lane Craig misuses and abuses Hilbert's popularized example of the infinite hotel to make disingenuous points about theology and in particular to argue for the existence of God. That's what I've got against Craig. Craig is no friend of mine and I was simply listening to a debate on youtube (I often let youtube autoplay like a radio) when I heard him quote Hilbert, so I dug into it and posted what I found. I'm not endorsing Craig lol 5) Cantor was led to set theory from Fourier series. In every online overview of Georg Cantor's magnificent creation of set theory, nobody ever mentions how he came upon his ideas. It's as if he woke up one day and decided to revolutionize the foundations of math and piss off his teacher and mentor Kronecker. Nothing could be further from the truth. Cantor was in fact studying Fourier's trigonometric series! One of the questions of that era was whether a given function could have more than one distinct Fourier series. To investigate this problem, Cantor had to consider the various types of sets of points on which two series could agree; or equivalently, the various sets of points on which a trigonometric series could be zero. He was thereby led to the problem of classifying various infinite sets of real numbers; and that led him to the discovery of transfinite ordinal and cardinal numbers. (Ordinals are about order in the same way that cardinals are about quantity). I still can't understand how one infinity can be bigger than another since, to be so, the smaller infinity would need to have limits which would then make it not infinity. In other words, and this is a fact that you probably will not find stated as clearly as I'm stating it here: If you begin by studying the flow of heat through an iron rod, you will inexorably discover transfinite set theory. Right, because of what Max said about the continuum model vs the actual discrete. Heat flow is actually IR light flow which is radiation from one molecule to another: a charged particle vibrates and vibrations include accelerations which cause EM radiation that emanates out in all directions; then the EM wave encounters another charged particle which causes vibration and the cycle continues until all the energy is radiated out. It's a discrete process from molecule to molecule, but is modeled as continuous for simplicity's sake. I've long taken issue with the 3 modes of heat transmission (conduction, convection, radiation) because there is only radiation. Atoms do not touch, so they can't conduct, but the van der Waals force simply transfers the vibrations more quickly when atoms are sufficiently close. Convection is simply vibrating atoms in linear motion that are radiating IR light.
I have many issues with physics and have often described it as more of an art than a science (hence why it's so difficult). I mean, there are pages and pages on the internet devoted to simply trying to define heat.https://www.quora.com/What-is-heat-1https://www.quora.com/What-is-meant-by-heathttps://www.quora.com/What-is-heat-in-physicshttps://www.quora.com/What-is-the-definition-of-heathttps://www.quora.com/What-distinguishes-work-and-heat Physics is a mess. What gamma rays are, depends who you ask. They could be high-frequency light or any radiation of any frequency that originated from a nucleus. But I'm digressing.... I do not know what that means in the ultimate scheme of things. But I submit that even the most ardent finitist must at least give consideration to this historical reality. It just means we're using averages rather than discrete actualities and it's close enough. I hope I've been able to explain why I completely agree with your point that infinities in physical equations don't imply the actual existence of infinities. Yet at the same time, I am pointing out that our best THEORIES of physics are invariably founded on highly infinitary math. As to what that means ... for my own part, I can't help but feel that mathematical infinity is telling us something about the world. We just don't know yet what that is. I think it means there are really no separate things and when an aspect of the universe attempts to inspect itself in order to find its fundamentals or universal truths, it will find infinity like a camera looking at its own monitor. Infinity is evidence of the continuity of the singular universe rather than an existing truly boundless thing. Infinity simply means you're looking at yourself. Anyway, great post! Please don't be mad. Everyone here values your presence and are intimidated by your obvious mathematical prowess Don't take my pushback too seriously I'd prefer if we could collaborate as colleagues rather than competing.
Let $f:G\rightarrow H$ be a group homomorphism such that $f_* :G_{ab}\rightarrow H_{ab}$ is an isomorphism and that $f_* : H_2(G)\rightarrow H_2(H)$ is an epimorphism. The question is to prove that this induces a monomorphism $$f:\frac{G}{\bigcap_{n=1}^{\infty}G_n}\rightarrow \frac{H}{\bigcap_{n=1}^{\infty}H_n}$$ Define the abelianization of a group $G$ to be the quotient group $G_{ab} := G/[G,G]$, where $[G,G]$ is the commutator subgroup. $G_n$ is the lower central series defined inductively by $G_1=[G,G]$, $G_{n+1}=[G,G_n]$. I see that $f(G_n)\subset H_n$, so we have an induced map $f: G/G_n\rightarrow H/H_n$ and, for similar reasons, a well-defined map $$f:\frac{G}{\bigcap_{n=1}^{\infty}G_n}\rightarrow \frac{H}{\bigcap_{n=1}^{\infty}H_n}$$ Now the question is how to show that this is a monomorphism. Let $[x]\in \frac{G}{\bigcap_{n=1}^{\infty}G_n}$ be such that $[f(x)]=0\in \frac{H}{\bigcap_{n=1}^{\infty}H_n}$, i.e., $f(x)\in \bigcap_{n=1}^{\infty}H_n$. As $f(x)\in \bigcap_{n=1}^{\infty}H_n$ we have in particular $f(x)\in H_1$. As $f_* : G/[G,G]\rightarrow H/[H,H]$ is an isomorphism and $f(x)\in [H,H]$, we have $x\in [G,G]$. So, there is some hope: if $f(x)\in H_1$ we have $x\in G_1$... As $f(x)\in H_2=[H,H_1]$... Taking for granted all better possibilities we have $f(x)=h(aba^{-1}b^{-1})h^{-1}(aba^{-1}b^{-1})^{-1}$... I am not sure where to go from this... I am not sure how to use the other details of that isomorphism to conclude $x\in G_n$ for all $n\in \mathbb{N}$. Please suggest something...
I've been trying to prove that $\frac{b-a}{1+b}<\ln(\frac{1+b}{1+a})<\frac{b-a}{1+a}$ using the Mean value theorem. What I've tried is setting $f(x)=\ln x$ and using the Mean value theorem on the interval $[1,\frac{1+b}{1+a}]$. I managed to prove that $\ln(\frac{1+b}{1+a})<\frac{b-a}{1+a}$ but not the other part, only that $\frac{1+a}{1+b}<\ln(\frac{1+b}{1+a})$. any help? p.s: sorry if I have some mistakes in my terminology or so on, I'm not totally fluent in english. You apply the mean value theorem to the function $f : x \mapsto \ln (1+x)$ on the interval $[a,b]$. Note that $f$ fulfills the hypothesis of the theorem : being continuous on $[a,b]$ and differentiable on $]a,b[$. On this interval the function $f'$ is bounded below by $\frac{1}{1+b}$ and above by $\frac{1}{1+a}$, so that $$\frac{1}{1+b} \leq \frac{f(b)-f(a)}{b-a} \leq \frac{1}{1+a}.$$ Replacing $f$ with its explicit definition and $f'$ by what I let you calculate, and "multiplying the inequalities" by $b-a$ gives you the wanted result. Suppose you know $\displaystyle \ln\left(\frac{1+b}{1+a}\right)<\frac{b-a}{1+a}$. If you rename $a$ and $b$ so that the number that was called $b$ is now called $a$ and vice-versa, then this says: $$ \ln\left(\frac{1+a}{1+b}\right)<\frac{a-b}{1+b}. $$ Multiplying both sides by $-1$ necessitates changing $\text{“}{<}\text{''}$ to $\text{“}{>}\text{''}$ and we get $$ -\ln\left(\frac{1+a}{1+b}\right)>\frac{b-a}{1+b}. $$ But $-\log\dfrac p q = \log\dfrac q p$, so that is the same as $$ \ln\left(\frac{1+b}{1+a}\right)>\frac{b-a}{1+b}. $$ We begin from the integral definition of the logarithm function $$\log x=\int_1^x \frac{1}{u}\,du\tag 1$$ for $x>0$. Note that $1/x$ is monotonically decreasing on $[1,x]$ for $x\ge 1$ and monotonically increasing on $[x,1]$ for $x\le 1$. Then, from $(1)$ and the Mean Value Theorem for Integrals, along with the Intermediate Value Theorem, we obtain the inequalities $$\frac{x-1}{x}\le \log x\le x-1 \tag 2$$ Letting $x=(1+b)/(1+a)$ in $(2)$ reveals $$\frac{b-a}{1+b}\le \log \left(\frac{1+b}{1+a}\right)\le \frac{b-a}{1+a}$$ thereby establishing the coveted inequality!
Good morning. I need to create a PDF and estimate its parameters by the MLE method. The distribution is a power law ($f(x)=C(k)\;x^{-k}$) and I need to estimate the $k$ parameter; $C(k)$ is determined by the normalization condition $\int_a^b\;C(k)\;x^{-k}\,dx=1$. I have tried to define the power distribution with the ProbabilityDistribution function using, for example, $C=5$, but when I want to plot it, Mathematica doesn't display the plot. What I've been writing is

distr = ProbabilityDistribution[5/x^1.05, {x, 0, Infinity}]
PDF[distr, x]
Plot[%, {x, 0, 1}]

NOTE: when I try to define the distribution function I'm setting $k=1.05$, just to check whether I'm doing it right. Could you help me, please? Thanks for your answers. Thanks for your help. I just want to understand something. You told me that I can approximate my power law function as a Pareto distribution. When I searched for the Pareto distribution I found a distribution very different from my power law, but I also found that Mathematica uses the form $$f(x)=\left(\frac{x}{k}\right)^{-\alpha}$$ where $k$ is the constant. If I rewrite this Pareto form in order to compare it with the power law that I'm using, $f(x)=k(\alpha)x^{-\alpha}$, I suppose that it would be $$f(x)=k^{\alpha}x^{-\alpha}$$ Am I right? Now, Bob Hanlon defined that distribution with $a$ instead of $k$ and $k-1$ instead of $\alpha$. I mean, $$f(x)=\left(\frac{x}{a}\right)^{-(k-1)}=a^{k-1}x^{-(k-1)}$$ Am I right? What I mean is, is the Pareto distribution my power law function if I take $k$ (or $a$, according to Bob) as my $k(\alpha)$ (or $a(k-1)$) and $\alpha$ (or $k-1$) as my exponent? If I do that, is the Pareto distribution calculated by Mathematica the same power law function that I need? Thanks for making this clearer for me.
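In case a language-agnostic sketch of the MLE itself is useful (Python here rather than Mathematica; the bounds $a,b$ and the synthetic data are made up, and $C(k)=(1-k)/(b^{1-k}-a^{1-k})$ comes from the normalization condition for $k\neq 1$):

```python
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 1.0, 10.0                     # assumed support [a, b]
k_true = 1.5                         # used only to generate toy data
rng = np.random.default_rng(0)
u = rng.random(5000)
# inverse-transform sampling from f(x) = C(k) x^(-k) on [a, b]
x = (a**(1 - k_true) + u * (b**(1 - k_true) - a**(1 - k_true)))**(1 / (1 - k_true))

def neg_log_likelihood(k):
    logC = np.log((1 - k) / (b**(1 - k) - a**(1 - k)))
    return -(x.size * logC - k * np.sum(np.log(x)))

# restricted to k > 1 in this sketch to avoid the k = 1 special case
res = minimize_scalar(neg_log_likelihood, bounds=(1.01, 5.0), method="bounded")
print("MLE of k:", res.x)            # should land near 1.5
```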
I read this pdf on non inertial frame, in particular I have a question on the deviation of free falling object due to Coriolis effect. Consider a ball let go from a tower at height $h$. The displacement due to Coriolis effect, calculated with formulas in Earth system, is $(4.19)$, after it there is explanation of the effect that uses the conservation of the angular momentum of the ball in a inertial frame. $$x =\frac{2\sqrt{2}ωh^{3/2}}{3g^{1/2}} \tag{4.19}$$ Just before being dropped, the particle is at radius $(R+h)$ and co-rotating, so it has speed $(R+h)ω$ and angular momentum per unit mass $(R+h)^2ω$. As it falls, its angular momentum is conserved (the only force is central), so its final speed v in the (Eastward) direction of rotation satisfies $Rv = (R+h)^2ω$, and $v= (R+h)^2ω/R$. Since this is larger than the speed $Rω$ of the foot of the tower, the particle gets ahead of the tower. The horizontal velocity relative to the tower is approximately $2hω$ (ignoring the $h^2$ term), so the average relative speed over the fall is about $hω$. We now see that the displacement $(4.19)$ can be expressed in the form (time of flight) times (average relative velocity) as might be expected. But $$v_{average} t_{flight}=h \omega \sqrt{\frac{2h}{g}}$$ Which differs by $\frac{2}{3}$ from $(4.19)$. Is that due to the approximation made? I also don't understand completely why the average relative velocity $v_{average}$ is taken to be half the relative velocity found. Isn't this valid only for constant accelerated linear motions?
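As a cross-check of the factor $\frac{2}{3}$ (my own sketch, not from the linked notes): the angular-momentum argument gives a relative eastward velocity that grows like $v_{\rm rel}(t)\approx \omega g t^2$ (twice the height fallen, times $\omega$), so its time average over the fall is one third of the final value $2h\omega$ rather than one half, since the "average equals half the final value" rule only holds for a linearly growing velocity. Integrating this quadratic profile reproduces $(4.19)$ exactly:

```python
import sympy as sp

# Symbolic check (my own sketch): with v_rel(t) ~ omega*g*t**2 and fall time
# T = sqrt(2h/g), the integrated eastward displacement reproduces (4.19).
w, g, h, t = sp.symbols('omega g h t', positive=True)
T = sp.sqrt(2 * h / g)
x_int = sp.integrate(w * g * t**2, (t, 0, T))
x_419 = 2 * sp.sqrt(2) * w * h**sp.Rational(3, 2) / (3 * sp.sqrt(g))
print(sp.simplify(x_int - x_419))    # 0
# The naive estimate (h*omega)*T overshoots by a factor 3/2 because v_rel grows
# quadratically, so its time average is one third, not one half, of the final value.
```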
The key here is that $X = Y\oplus Y^\perp$, i.e. for any $x\in X$ there are unique $y\in Y, z\in Y^\perp$ such that $x = y+z$. This is essential in order for $P(x) = P(y+z) = y$ to be well defined in the first place. Now, what you showed is that $\alpha x_1 +\beta x_2$ can be uniquely written as $(\alpha y_1 + \beta y_2) + (\alpha z_1 + \beta z_2)$ where $\alpha y_1 + \beta y_2\in Y$, $\alpha z_1 + \beta z_2\in Y^\perp$. So, what is $P(\alpha x_1 + \beta x_2)$? Edit: Your definition of orthogonal projection assumes that for each $x\in X$ there is unique $y\in Y$ such that $x-y\in Y^\perp$. This is actually equivalent to stating that $Y\oplus Y^\perp = X$, i.e. there are unique $y\in Y$, $z\in Y^\perp$ such that $x = y+z$ (notice that $z = x-y$ from your definition). So, assume that there is unique $y\in Y$ such that $x-y\in Y^\perp$. Then, $P(x) = P(y+(x-y)) = y$ is just restating my claim if you substitute $z = x-y$. The bigger question is why such $y$ exists. In finite-dimensional case this follows immediately from existence of orthonormal basis for $Y$. Then you can define $y = \sum \langle x,e_i\rangle e_i$. In infinite-dimensional case we can use projection theorem for Hilbert spaces when we can find such $y \in Y$ that minimizes length $\|x-y\|$ (think of a point and a line: minimum distance between point and line is given by orthogonal projection). I hope this clarifies things a bit.
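In the finite-dimensional case the decomposition and the resulting linearity are easy to verify numerically (a toy sketch of my own, with $P = QQ^{\mathsf T}$ for an orthonormal basis $Q$ of $Y$):

```python
import numpy as np

# Y is a random 2-dimensional subspace of R^5 with orthonormal basis Q (columns),
# so the orthogonal projection onto Y is P = Q Q^T.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 2)))
P = Q @ Q.T

x1, x2 = rng.standard_normal(5), rng.standard_normal(5)
alpha, beta = 2.0, -3.0

print(np.allclose(P @ (alpha * x1 + beta * x2),
                  alpha * (P @ x1) + beta * (P @ x2)))   # linearity of P
print(np.allclose(Q.T @ (x1 - P @ x1), 0))               # x - Px lies in Y-perp
```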
I am interested in solving ODE systems of the form \begin{align} \frac{\partial \vec{u}}{\partial t} = F(\vec{u}) \end{align} where $F$ is a nonlinear operator, $\vec{u}$ is a vector valued function of $t$, and we have initial conditions $\vec{u}(0) = \vec{u}_0$. I am specifically interested in equations of this form that come from (finite element) spatial discretizations of initial-boundary value problems involving systems of PDEs. An accurate way of doing this would be to further discretize $F(\vec{u})$ and take small time steps with high order, but this may be too costly for the application. Instead, one may perform the ``operator split'' \begin{align} F(\vec{u}) = A(\vec{u}) + B(\vec{u}), \end{align} and solve the following two subproblems in an alternating fashion for a single time step $t \mapsto t + \Delta t$: \begin{align} \begin{cases} \frac{\partial \vec{v}}{\partial t} = A(\vec{v}), \\ \frac{\partial \vec{w}}{\partial t} = B(\vec{w}). \end{cases} \end{align} To start, the initial condition $\vec{u_0}$ is used to solve the first equation for $\vec{v_1}$, and then $\vec{v_1}$ is used as the initial condition to solve the second equation for $\vec{w_1}$. We then set $\vec{u_1} := \vec{w_1}$, and then repeat this process to obtain $\vec{u_2}, \vec{u_3}, ... $ at each time step. This operator splitting method is well known to be first-order accurate, i.e., the splitting error grows like $\mathcal{O}(\Delta t)$. The advantage here is that the subproblems should somehow be easier to solve. I am somewhat familiar with CFL conditions for explicit time-stepping schemes. They basically tell you how small you need to make your time step, for a specific problem, relative to your mesh size, and are generally only applicable for simple (linear) problems. For implicit time-stepping schemes applied to simple (linear) problems, one can expect unconditional stability, allowing for large time steps to be used. My questions are the following: When solving the two subproblems $\vec{v}_t = A\vec{v}$ and $\vec{w}_t = B \vec{w}$ in an alternating fashion, is one required to use the same time-integration scheme for both subproblems? For example, would I be able to solve the first equation for time $v_{t_{n+1}}$ using a first-order implicit method, and then the second equation for time $w_{t_{n+1}}$ using a second-order explicit method? Do the time-step sizes for the two subproblems need to be the same? Or could one, for example, compute $v_{t_{n+1}}$ using the entire $\Delta t$ time step, and compute $w_{t_{n+1}}$ using four smaller $\frac{\Delta t}{4}$ time steps? Isn't this what higher order operator splitting basically is? Also, is this how you would ensure minimum step-size stability if one of the subproblems was "stiff"? If the two subproblems are nonlinear, how does one determine a good timestep size for each problem to guarantee stability? Is there any known relationship between the timestep size needed to guarantee stability of the original problem and timestep sizes needed to guarantee stability for the two subproblems? Should I probably be using second-order and higher operator splitting schemes, and if so, how do I decide which one to use (e.g., from this list http://www.asc.tuwien.ac.at/~winfried/splitting/index.php?rc=0&ab=strang-ab&name=Strang)? Does it matter which subproblem you solve first? Partial answers or references to literature are appreciated.
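For what it's worth, here is a minimal sketch (my own toy example with two non-commuting linear operators and exact sub-solves via matrix exponentials) that exhibits the first-order accuracy of Lie splitting next to the second-order accuracy of Strang splitting; it also illustrates the first question above, since each sub-step could just as well use its own integrator:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])    # "rotation" part (made-up operators)
B = np.array([[-0.5, 0.0], [0.0, -0.1]])   # "damping" part; A and B do not commute
u0 = np.array([1.0, 0.0])
T, nsteps = 1.0, 50
dt = T / nsteps

u_lie, u_strang = u0.copy(), u0.copy()
for _ in range(nsteps):
    # Lie splitting: full step with A, then full step with B  -> O(dt) error
    u_lie = expm(B * dt) @ (expm(A * dt) @ u_lie)
    # Strang splitting: half step A, full step B, half step A -> O(dt^2) error
    u_strang = expm(A * dt / 2) @ (expm(B * dt) @ (expm(A * dt / 2) @ u_strang))

u_exact = expm((A + B) * T) @ u0
print("Lie error   :", np.linalg.norm(u_lie - u_exact))
print("Strang error:", np.linalg.norm(u_strang - u_exact))
```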
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communism, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing is favor "Communism" who distance themselves from, say the USSR and red China, and people who arguing in favor of "Capitalism" who distance themselves from, say the US and the Europe Union. since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap) I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition Why is the graviton spin 2, beyond hand-waiving, sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
Search Now showing items 1-10 of 32 The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ... Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... 
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
We consider the problem of distributed estimation of a Gaussian vector with linear observation model. Each sensor makes a scalar noisy observation of the unknown vector, quantizes its observation, maps it to a digitally modulated symbol, and transmits the symbol over orthogonal power-constrained fading channels to a fusion center (FC). The FC is tasked with fusing the received signals from sensors and estimating the unknown vector. We derive the Bayesian Fisher Information Matrix (FIM) for three types of receivers: (i) coherent receiver (ii) noncoherent receiver with known channel envelopes (iii) noncoherent receiver with known channel statistics only. We also derive the Weiss-Weinstein bound (WWB). We formulate two constrained optimization problems, namely maximizing trace and log-determinant of Bayesian FIM under network transmit power constraint, with sensors transmit powers being the optimization variables. We show that for coherent receiver, these problems are concave. However, for noncoherent receivers, they are not necessarily concave. The solution to the trace of Bayesian FIM maximization problem can be implemented in a distributed fashion. We numerically investigate how the FIM-max power allocation across sensors depends on the sensors observation qualities and physical layer parameters as well as the network transmit power constraint. Moreover, we evaluate the system performance in terms of MSE using the solutions of FIM-max schemes, and compare it with the solution obtained from minimizing the MSE of the LMMSE estimator (MSE-min scheme), and that of uniform power allocation. These comparisons illustrate that, although the WWB is tighter than the inverse of Bayesian FIM, it is still suitable to use FIM-max schemes, since the performance loss in terms of the MSE of the LMMSE estimator is not significant. A new class of formal latent-variable stochastic processes called hidden quantum models (HQM's) is defined in order to clarify the theoretical foundations of ion channel signal processing. HQM's are based on quantum stochastic processes which formalize time-dependent observation. They allow the calculation of autocovariance functions which are essential for frequency-domain signal processing. HQM's based on a particular type of observation protocol called independent activated measurements are shown to be distributionally equivalent to hidden Markov models yet without an underlying physical Markov process. Since the formal Markov processes are non-physical, the theory of activated measurement allows merging energy-based Eyring rate theories of ion channel behavior with the more common phenomenological Markov kinetic schemes to form energy-modulated quantum channels. Using the simplest quantum channel model consistent with neuronal membrane voltage-clamp experiments, activation eigenenergies are calculated for the Hodgkin-Huxley K+ and Na+ ion channels. It is also shown that maximizing entropy under constrained activation energy yields noise spectral densities approximating $S(f) \sim 1/f^\alpha$, thus offering a biophysical explanation for the ubiquitous $1/f$-type in neurological signals. In this paper we provide a set of stability conditions for linear time-varying networked control systems with arbitrary topologies using a piecewise quadratic switching stabilization approach with multiple quadratic Lyapunov functions.
We use this set of stability conditions to provide a novel iterative low-complexity algorithm that must be updated and optimized in discrete time for the design of a sparse observer-controller network, for a given plant network with an arbitrary topology. We employ distributed observers by utilizing the output of other subsystems to improve the stability of each observer. To avoid unbounded growth of controller and observer gains, we impose bounds on the norms of the gains. A repetitive visual stimulus induces a brain response known as the Steady State Visual Evoked Potential (SSVEP) whose frequency matches that of the stimulus. Reliable SSVEP-based Brain-Computer-Interfacing (BCI) is premised in part on the ability to efficiently detect and classify the true underlying frequencies in real time. We pose the problem of detecting different frequencies corresponding to different stimuli as a composite multi-hypothesis test, where measurements from multiple electrodes are assumed to admit a sparse representation in a Ramanujan Periodicity Transform (RPT) dictionary. We develop an RPT detector based on a generalized likelihood ratio test of the underlying periodicity that accounts for the spatial correlation between the electrodes. Unlike the existing supervised methods which are highly data-dependent, the RPT detector only uses data to estimate the per-subject spatial correlation. The RPT detector is shown to yield promising results comparable to state-of-the-art methods such as standard CCA and IT CCA based on experiments with real data. Its ability to yield high accuracy with short epochs holds potential to advance real-time BCI technology. We consider distributed estimation of a Gaussian source in a heterogeneous bandwidth constrained sensor network, where the source is corrupted by independent multiplicative and additive observation noises, with incomplete statistical knowledge of the multiplicative noise. For multi-bit quantizers, we derive the closed-form mean-square-error (MSE) expression for the linear minimum MSE (LMMSE) estimator at the FC. For both error-free and erroneous communication channels, we propose several rate allocation methods named as longest root to leaf path, greedy and integer relaxation to (i) minimize the MSE given a network bandwidth constraint, and (ii) minimize the required network bandwidth given a target MSE. We also derive the Bayesian Cramer-Rao lower bound (CRLB) and compare the MSE performance of our proposed methods against the CRLB. Our results corroborate that, for low power multiplicative observation noises and adequate network bandwidth, the gaps between the MSE of our proposed methods and the CRLB are negligible, while the performance of other methods like individual rate allocation and uniform is not satisfactory. We consider a wireless sensor network (WSN), consisting of several sensors and a fusion center (FC), which is tasked with solving an M-ary hypothesis testing problem. Sensors make M-ary decisions and transmit their digitally modulated decisions over orthogonal channels, which are subject to Rayleigh fading and noise, to the FC. Adopting Bayesian optimality criterion, we consider training and non-training based distributed detection systems and investigate the effect of imperfect channel state information (CSI) on the optimal maximum a posteriori probability (MAP) fusion rules and optimal power allocation between sensors, when the sum of training and data symbol transmit powers is fixed. We consider J-divergence criteria to do power allocation between sensors.
The theoretical results show that the J-divergence for coherent reception is maximized when the total training power is half of the total power, whereas for noncoherent reception the training power that maximizes the J-divergence is zero. The simulated results also show that the probability of error is minimized when the training power is half of the total power for coherent reception and zero for noncoherent reception. $Objective$: A characteristic of neurological signal processing is high levels of noise from sub-cellular ion channels up to whole-brain processes. In this paper, we propose a new model of electroencephalogram (EEG) background periodograms, based on a family of functions which we call generalized van der Ziel--McWhorter (GVZM) power spectral densities (PSDs). To the best of our knowledge, the GVZM PSD function is the only EEG noise model which has relatively few parameters, matches recorded EEG PSD's with high accuracy from 0 Hz to over 30 Hz, and has approximately $1/f^\theta$ behavior in the mid-frequencies without infinities. $Methods$: We validate this model using three approaches. First, we show how GVZM PSDs can arise in populations of ion channels in maximum entropy equilibrium. Second, we present a class of mixed autoregressive models, which simulate brain background noise and whose periodograms are asymptotic to the GVZM PSD. Third, we present two real-time estimation algorithms for steady-state visual evoked potential (SSVEP) frequencies, and analyze their performance statistically. $Results$: In pairwise comparisons, the GVZM-based algorithms showed statistically significant accuracy improvement over two well-known and widely-used SSVEP estimators. $Conclusion$: The GVZM noise model can be a useful and reliable technique for EEG signal processing. $Significance$: Understanding EEG noise is essential for EEG-based neurology and applications such as real-time brain-computer interfaces (BCIs), which must make accurate control decisions from very short data epochs. The GVZM approach represents a successful new paradigm for understanding and managing this neurological noise. Here, we study the ultimately bounded stability of a network of mismatched systems using the Lyapunov direct method. We derive an upper bound on the norm of the error of network states from its average states, which it achieves in finite time. Then, we devise a decentralized compensator to asymptotically pin the network of mismatched systems to a desired trajectory. Next, we design distributed estimators to compensate for the mismatched parameters. The performances of adaptive decentralized and distributed compensations are analyzed. Our analytical results are verified by several simulations in a network of globally connected Lorenz oscillators. This paper considers the problem of binary distributed detection of a known signal in correlated Gaussian sensing noise in a wireless sensor network, where the sensors are restricted to use likelihood ratio test (LRT), and communicate with the fusion center (FC) over bandwidth-constrained channels that are subject to fading and noise. To mitigate the deteriorating effect of fading encountered in the conventional parallel fusion architecture, in which the sensors directly communicate with the FC, we propose new fusion architectures that enhance the detection performance, via harvesting cooperative gain (so-called decision diversity gain).
In particular, we propose: (i) cooperative fusion architecture with Alamouti's space-time coding (STC) scheme at sensors, (ii) cooperative fusion architecture with signal fusion at sensors, and (iii) parallel fusion architecture with local threshold changing at sensors. For these schemes, we derive the LRT and majority fusion rules at the FC, and provide upper bounds on the average error probabilities for homogeneous sensors, subject to uncorrelated Gaussian sensing noise, in terms of signal-to-noise ratio (SNR) of communication and sensing channels. Our simulation results indicate that, when the FC employs the LRT rule, unless for low communication SNR and moderate/high sensing SNR, performance improvement is feasible with the new fusion architectures. When the FC utilizes the majority rule, such improvement is possible, unless for high sensing SNR. Here, we study the ultimately bounded stability of network of mismatched systems using Lyapunov direct method. The upper bound on the error of oscillators from the center of the neighborhood is derived. Then the performance of an adaptive compensation via decentralized control is analyzed. Finally, the analytical results for a network of globally connected Lorenz oscillators are verified. We consider distributed estimation of a Gaussian vector with a linear observation model in an inhomogeneous wireless sensor network, where a fusion center (FC) reconstructs the unknown vector, using a linear estimator. Sensors employ uniform multi-bit quantizers and binary PSK modulation, and communicate with the FC over orthogonal power- and bandwidth-constrained wireless channels. We study transmit power and quantization rate (measured in bits per sensor) allocation schemes that minimize mean-square error (MSE). In particular, we derive two closed-form upper bounds on the MSE, in terms of the optimization parameters and propose coupled and decoupled resource allocation schemes that minimize these bounds. We show that the bounds are good approximations of the simulated MSE and the performance of the proposed schemes approaches the clairvoyant centralized estimation when total transmit power or bandwidth is very large. We study how the power and rate allocation are dependent on sensors observation qualities and channel gains, as well as total transmit power and bandwidth constraints. Our simulations corroborate our analytical results and illustrate the superior performance of the proposed algorithms.
ISSN: 1534-0392 eISSN: 1553-5258 All Issues Communications on Pure & Applied Analysis November 2012 , Volume 11 , Issue 6 Select all articles Export/Reference: Abstract: The present volume is dedicated to Michel Pierre. An international Workshop for his 60th birthday, entitled "Partial Dierential Equations and Applications", was organized in Vittel (France) from October 22 to October 24, 2009. Abstract: We suggest an approach for proving global existence of bounded solutions and existence of a maximal attractor in $L^\infty$ for a class of abstract $3\times 3$ reaction-diffusion systems. The motivation comes from the concrete example of ``facilitated diffusion'' system with different non-homogeneous boundary conditions modelling the blood oxigenation reaction $Hb+O_2 \rightleftharpoons HbO_2$. The method uses classical tools of linear semigroup theory, the $L^p$ techniques developed by Martin and Pierre [16] and B\'enilan and Labani [6] and the hint of ``preconditioning operators'': roughly speaking, the study of solutions of $(\partial_t +A_i)u=f$ is reduced to the study of solutions to $(\partial_t+B)(B^{-1}u)=B^{-1}f+(I-B^{-1}A_i)u,$ with a conveniently chosen operator $B$. In particular, we need the $L^\infty-L^p$ regularity of $B^{-1}A_i$ and the positivity of the operator $(B^{-1}A_i-I)$ on the domain of $A_i$. The same ideas can be applied to systems of higher dimension. To give an example, we prove the existence of a maximal attractor in $L^\infty$ for the $5\times 5$ system of facilitated diffusion modelling the coupled reactions $Hb+O_2 \rightleftharpoons HbO_2$, $Hb+CO_2 \rightleftharpoons HbCO_2$. Abstract: If $\Omega$ is any compact Lipschitz domain, possibly in a Riemannian manifold, with boundary $\Gamma = \partial \Omega$, the Dirichlet-to-Neumann operator $\mathcal{D}_\lambda$ is defined on $L^2(\Gamma)$ for any real $\lambda$. We prove a close relationship between the eigenvalues of $\mathcal{D}_\lambda$ and those of the Robin Laplacian $\Delta_\mu$, i.e. the Laplacian with Robin boundary conditions $\partial_\nu u =\mu u$. This is used to give another proof of the Friedlander inequalities between Neumann and Dirichlet eigenvalues, $\lambda^N_{k+1} \leq \lambda^D_k$, $k \in N$, and to sharpen the inequality to be strict, whenever $\Omega$ is a Lipschitz domain in $R^d$. We give new counterexamples to these inequalities in the general Riemannian setting. Finally, we prove that the semigroup generated by $-\mathcal{D}_\lambda$, for $\lambda$ sufficiently small or negative, is irreducible. Abstract: Recently, Auscher and Axelsson gave a new approach to non-smooth boundary value problems with $L^2$ data, that relies on some appropriate weighted maximal regularity estimates. As part of the development of the corresponding $L^p$ theory, we prove here the relevant weighted maximal estimates in tent spaces $T^{p, 2}$ for $p$ in a certain open range. We also study the case $p=\infty$. Abstract: We consider the spectrum of the family of one-dimensional self-adjoint operators $-{\mathrm{d}}^2/{\mathrm{d}}t^2+(t-\zeta)^2$, $\zeta\in \mathbb{R}$ on the half-line with Neumann boundary condition. It is well known that the first eigenvalue $\mu(\zeta)$ of this family of harmonic oscillators has a unique minimum when $\zeta\in\mathbb{R}$. This paper is devoted to the accurate computations of this minimum $\Theta_{0}$ and $\Phi(0)$ where $\Phi$ is the associated positive normalized eigenfunction. 
We propose an algorithm based on finite element method to determine this minimum and we give a sharp estimate of the numerical accuracy. We compare these results with a finite element method. Abstract: We consider reaction-diffusion systems with merely measurable reaction terms to cover the possibility of discontinuities. Solutions of such problems are defined as solutions to appropriate differential inclusions which, in an abstract form, lead to evolution inclusions of the form $u' \in - A u + F(t,u)$ on $[0,T], u(0)=u_{0},$ where $A$ is $m$-accretive and $F$ is of upper semicontinuous type. While such problems, in general, can exhibit non-existence of solutions, the present paper shows that especially for $m$-completely accretive $A$, and under reasonable assumptions on $F$, mild solutions do exist. Abstract: This paper is devoted to the study of the well-posedness and the long time behavior of the Caginalp phase-field model with singular potentials and dynamic boundary conditions. Thanks to a suitable definition of solutions, coinciding with the strong ones under proper assumptions on the bulk and surface potentials, we are able to get dissipative estimates, leading to the existence of the global attractor with finite fractal dimension, as well as of an exponential attractor. Abstract: We consider the annulus $\mathcal{A}_R$ of complex numbers with modulus and inverse of modulus bounded by $R>1$. We present some situations, in which this annulus is a K-spectral set for an operator $A$, and some related estimates. Abstract: We study the limit of a kinetic evolution equation involving a small parameter and perturbed by a smooth random term which also involves the small parameter. Generalizing the classical method of perturbed test functions, we show the convergence to the solution of a stochastic diffusion equation. Abstract: We consider two inverse problems related to the tokamak Tore Suprathrough the study of the magnetostatic equation for the poloidal flux. The first one deals with the Cauchy issue of recovering in a two dimensional annular domain boundary magnetic values on the inner boundary, namely the limiter, from available overdetermined data on the outer boundary. Using tools from complex analysis and properties of genereralized Hardy spaces, we establish stability and existence properties. Secondly the inverse problem of recovering the shape of the plasma is addressed thank tools of shape optimization. Again results about existence and optimality are provided. They give rise to a fast algorithm of identification which is applied to several numerical simulations computing good results either for the classical harmonic case or for the data coming from Tore Supra. Abstract: Several fundamental results on existence and flow-invariance of solutions to the nonlinear nonautonomous partial differential delay equation $ \dot{u}(t) + B(t)u(t) \ni F(t; u_t), 0 \leq s \leq t, u_s = \varphi, $ with $ B(t)\subset X\times X$ $\omega-$accretive, are developed for a general Banach space $X.$ In contrast to existing results, with the history-response $F(t;\cdot)$ globally defined and, at least, Lipschitz on bounded sets, the results are tailored for situations with $F(t;\cdot)$ defined on -- possibly -- thin subsets of the initial-history space $E$ only, and are applied to place several classes of population models in their natural $L^1-$setting. 
The main result solves the open problem of a subtangential condition for flow-invariance of solutions in the fully nonlinear case, paralleling those known for the cases of (a) no delay, (b) ordinary delay equations with $B(\cdot)\equiv 0,$ and (c) the semilinear case. Abstract: In this paper, we prove an adaptation of the classical compactness Aubin-Simon lemma to sequences of functions obtained through a sequence of discretizations of a parabolic problem. The main difficulty tackled here is to generalize the classical proof to handle the dependency of the norms controlling each function $u^{(n)}$ of the sequence with respect to $n$. This compactness result is then used to prove the convergence of a numerical scheme combining finite volumes and finite elements for the solution of a reduced turbulence problem. Abstract: Following a result of Chill and Jendoubi in the continuous case, we study the asymptotic behavior of sequences $(U^n)_n$ in $R^d$ which satisfy the following backward Euler scheme: $\varepsilon\frac{(U^{n+1}-2U^n+U^{n-1}}{\Delta t^2} +\frac{U^{n+1}-U^n}{\Delta t}+\nabla F(U^{n+1})=G^{n+1}, n\ge 0, $ where $\Delta t>0$ is the time step, $\varepsilon\ge 0$, $(G^{n+1})_n$ is a sequence in $ R^d$ which converges to $0$ in a suitable way, and $F\in C^{1,1}_{l o c}(R^d, R)$ is a function which satisfies a Łojasiewicz inequality. We prove that the above scheme is Lyapunov stable and that any bounded sequence $(U^n)_n$ which complies with it converges to a critical point of $F$ as $n$ tends to $\infty$. We also obtain convergence rates. We assume that $F$ is semiconvex for some constant $c_F\ge 0$ and that $1/\Delta t Abstract: In the present survey paper, basic convergence results for gradient-like systems relying on the Łojasiewicz gradient inequality are recalled in a self-contained way. A uniform version of the gradient inequality is used to get directly convergence and the rate of convergence in one step and a new technical trick, consisting in the evaluation of the integral of the velocity norm from $t$ to $2t$ is introduced. A short idea of the state of the art without technical details is also given. Abstract: We study a spectral problem related to a reaction-diffusion model where preys and predators do not live on the same area. We are interested in the optimal zone where a control should take place. First, we prove existence of an optimal domain in a natural class. Then, it seems plausible that the optimal domain is localized in the intersection of the living areas of the two species. We prove this fact in one dimension for small sized domains. Abstract: We consider in this paper the thermo-diffusive model for flame propagation, which is a reaction-diffusion equation of the KPP (Kolmogorov, Petrovskii, Piskunov) type, posed on an infinite cylinder. Such a model has a family of travelling waves of constant speed, larger than a critical speed $c_*$. The family of all supercritical waves attract a large class of initial data, and we try to understand how. We describe in this paper the fate of an initial datum trapped between two supercritical waves of the same velocity: the solution will converge to a whole set of translates of the same wave, and we identify the convergence dynamics as that of an effective drift, around which an effective diffusion process occurs. Abstract: We propose a new method for analysis of shape optimization problems. The framework of dual dynamic programming is introduced for a solution of the problems. 
The shape optimization problem for a linear elliptic boundary value problem is formulated in terms of characteristic functions which define the support of the control. The optimal solution of such a problem can be obtained by solving the sufficient optimality conditions.
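One of the abstracts above studies the damped backward Euler scheme $\varepsilon(U^{n+1}-2U^n+U^{n-1})/\Delta t^2+(U^{n+1}-U^n)/\Delta t+\nabla F(U^{n+1})=G^{n+1}$. As a rough, purely illustrative Python sketch (not the authors' code): the quadratic choice of F, the zero forcing, the step size and the initial data below are all assumptions made only to exercise the recursion; each implicit step is solved numerically.

import numpy as np
from scipy.optimize import fsolve

def gradF(U):
    # gradient of the assumed F(U) = (U_0^2 + 2 U_1^2) / 2
    return np.array([U[0], 2.0 * U[1]])

eps, dt, n_steps = 0.1, 0.01, 5000
U_prev = np.array([1.0, -1.0])   # U^0 (assumed)
U_curr = np.array([1.0, -1.0])   # U^1 (assumed, start "at rest")

for n in range(n_steps):
    G_next = np.zeros(2)          # forcing G^{n+1} taken to be 0
    def residual(U_next):
        return (eps * (U_next - 2 * U_curr + U_prev) / dt**2
                + (U_next - U_curr) / dt + gradF(U_next) - G_next)
    U_next = fsolve(residual, U_curr)   # solve the implicit step
    U_prev, U_curr = U_curr, U_next

print(U_curr)   # approaches a critical point of F (here, the origin)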
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
ä is in the extended latin block and n is in the basic latin block so there is a transition there, but you would have hoped \setTransitionsForLatin would have not inserted any code at that point as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle12 secs ago @egreg you are credited in the file, so you inherit the blame:-) @UlrikeFischer I was leaving it for @egreg to trace but I suspect the package makes some assumptions about what is safe, it offers the user "enter" and "exit" code for each block but xetex only has a single insert, the interchartoken at a boundary the package isn't clear what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor if anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g., worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish and given that the rest of the network regularly downvotes lots of new users will not know or not agree with a "-1" policy, I don't think it was ever really that regularly enforced just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more than (or less?) -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still the general principle applies, when you're new and your question gets donwvoted too much this might cause the wrong impressions. @DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for that, and I think that there should be a bit of reasoning with the opinion based answers, some objective arguments why this is good. See for example the other Emacs answer (still subjectively a bad answer), that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes but many people just join for a while and come form other sites where downvoting is more common so I think it is impossible to expect there is no multiple downvoting, the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but.... "Describe" while it does have a technical meaning close to what you want is almost always used more casually to mean "talk about", I think I would say" Let k be a circle with centre M and radius r" @AlanMunn definitions.net/definition/describe gives a websters definition of to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle If you are really looking for alternatives to "draw" in "draw a circle" I strongly suggest you hop over to english.stackexchange.com and confirm to create an account there and ask ... at least the number of native speakers of English will be bigger there and the gamification aspect of the site will ensure someone will rush to help you out. 
Of course there is also a chance that they will repeat the advice you got here: to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have an idea that because a word is informal in German, its translation in English is also informal. This is simply wrong. And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is still possibly harder. Probably depends heavily on the translation. @AlanMunn I am very well aware of the differences of word use between languages (and my limitations in regard to my knowledge and use of English as a non-native speaker). In fact, words in different (related) languages sharing the same origin is kind of a hobby. Needless to say that more than once the contemporary meaning didn't match 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx: \documentclass{book} \usepackage{showidx} \usepackage{imakeidx} \makeindex \begin{document} Test\index{xxxx} \printindex \end{document} generates the error: ! Undefined control sequence. <argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{... @EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission as you want the authoring date not the date it was last run through tex and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed. arXiv caches the pdfs for several years but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached Do any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but it clearly does when I run it on a single pdf file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough tex, but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" in these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate.
I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution @yo' that's unlikely to help with prints where the class itself calls from the system time. @EmilioPisanty well the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could, as @yo' showed, redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year \month and \day which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed. And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. "beschreiben", engl. "describe", comes from the mathematical technical language of the 16th century, that is, from Middle High German, and means roughly "construct". And that comes from the original meaning of describe: "making a curved movement". This language is used in the literary style of the 19th and 20th centuries and in the GDR. You can have that in English too: scribe (verb), score a line on with a pointed instrument, as in metalworking https://www.definitions.net/definition/scribe @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe' which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, which holds the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with the base $|M_b B|$ and the angle $\alpha$ at the point $P$. The altitude of the base of the isosceles triangle bisects both $|M_b B|$ at $M_{s_b}$ and the angle $\alpha$ at the top. \par The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the corner. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described by $S$ of radius…
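The construction described above comes down to intersecting two circles (the barrel circle and the circle of radius $\frac23 s_a$ about the centroid $S$). As a minimal Python sketch of just that one step — the centres and radii below are made-up placeholders standing in for whatever the given data produces:

import numpy as np

def circle_intersection(c1, r1, c2, r2):
    # Intersection points of two circles; returns 0, 1 or 2 points.
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d = np.linalg.norm(c2 - c1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []                                   # no proper intersection
    a = (r1**2 - r2**2 + d**2) / (2 * d)            # distance from c1 to the chord
    h = np.sqrt(max(r1**2 - a**2, 0.0))             # half the chord length
    mid = c1 + a * (c2 - c1) / d
    perp = np.array([-(c2 - c1)[1], (c2 - c1)[0]]) / d
    return [mid + h * perp, mid - h * perp]

# hypothetical numbers, purely to exercise the construction:
barrel_centre, barrel_radius = (0.0, 0.0), 3.0      # circle over the median s_b
S, two_thirds_sa = (1.0, 0.5), 2.0                  # centroid and 2/3 of s_a
print(circle_intersection(barrel_centre, barrel_radius, S, two_thirds_sa))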
Let $S$ be the set of all probability distributions on $\mathbb{R}$ and $S_n=\{p_\theta\}$ be an $n$-dimensional submanifold consisting of a parameterized family of probability distributions on $\mathbb{R}$, where $\theta=(\theta_1,\dots, \theta_n)\in \Theta$ are called the parameters of the probability density $p_\theta(x)$ and $\Theta$ is an open set homeomorphic to $\mathbb{R}^n$. It is also assumed that $p_\theta(x)>0$ for all $x$. For example, $$p_{\theta=(\mu,\sigma)}(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{\frac{-(x-\mu)^2}{2\sigma^2}}$$ where $\mu\in \mathbb{R},\sigma>0$ forms a $2$-dimensional submanifold. On this manifold, a Riemannian metric, called the Fisher information metric, is defined by the following. \begin{eqnarray*} g_{ij}=g(\partial_i,\partial_j) &=&\int \partial_i \log p_\theta(x) ~\partial_j \log p_\theta(x) ~p_\theta(x)~ dx\\ &=& \int \partial_i p_\theta(x) ~\partial_j \log p_\theta(x)~ dx \end{eqnarray*} where $\partial_i=\frac{\partial}{\partial \theta_i}$. My question is regarding a point on page 384 of this Ann. Stat. paper by Amari. In the above mentioned paper, when he says a curve $\{q_t(x)\}$ in $S$, where $q_0(x)=p_\theta(x)$, intersects $S_n$ orthogonally at $q_0(x)=p_\theta(x)$, he means $$\int \partial_i \log p_\theta(x) ~\frac{d}{dt} \log q_t(x)\lvert_{t=0} ~q_0(x)~dx=0,$$ i.e., $$\int \partial_i p_\theta(x) ~\frac{d}{dt} \log q_t(x)\lvert_{t=0} ~dx=0.$$ Why? Could somebody help me understand by giving an intuitive explanation of these concepts?
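Not part of the question, but as a quick numerical sanity check of the Fisher metric formula for the Gaussian example above, one can approximate the defining integral directly; for $N(\mu,\sigma)$ the exact answer is $\mathrm{diag}(1/\sigma^2, 2/\sigma^2)$. The parameter values and finite-difference step below are arbitrary choices for the check.

import numpy as np
from scipy.integrate import quad

mu, sigma = 0.0, 1.5        # example parameter point
h = 1e-5                    # finite-difference step in the parameters

def logp(x, m, s):
    return -0.5 * ((x - m) / s) ** 2 - np.log(np.sqrt(2.0 * np.pi) * s)

def score_mu(x):            # d/dmu of log p at (mu, sigma)
    return (logp(x, mu + h, sigma) - logp(x, mu - h, sigma)) / (2.0 * h)

def score_sigma(x):         # d/dsigma of log p at (mu, sigma)
    return (logp(x, mu, sigma + h) - logp(x, mu, sigma - h)) / (2.0 * h)

def p(x):
    return np.exp(logp(x, mu, sigma))

scores = [score_mu, score_sigma]
g = [[quad(lambda x: si(x) * sj(x) * p(x), -np.inf, np.inf)[0]
      for sj in scores] for si in scores]
print(np.round(g, 4))                  # ~ [[1/sigma^2, 0], [0, 2/sigma^2]]
print(1 / sigma**2, 2 / sigma**2)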
ISSN: 1547-5816 eISSN: 1553-166X Journal of Industrial & Management Optimization, October 2007, Volume 3, Issue 4. A special issue dedicated to Professor Changyu Wang in recognition of his contributions. Abstract: This special issue is dedicated to Professor Changyu Wang on the occasion of his 70th birthday in recognition of his contributions to Operations Research and its applications and his lasting impact as an educator. Abstract: Clustering is one of the most important tools in statistics. In a graph theory model, clustering is the process of finding all dense subgraphs. In this paper, a new clustering method is introduced. One of the most significant differences between the new method and other existing methods is that this new method constructs a much smaller hierarchical tree, which clearly highlights meaningful clusters. Another important feature of the new method is overlapping clustering, or multi-membership. Multi-membership is a concept that has recently received increased attention in the literature (Palla, Derényi, Farkas and Vicsek, Nature 2005; Pereira-Leal, Enright and Ouzounis, Bioinformatics 2004; Futschik and Carlisle, J. Bioinformatics and Computational Biology 2005). Abstract: As a generalization of the traditional path protection (PP) scheme in WDM networks, where a backup path is needed for each active path, the partial path protection (PPP) scheme uses a collection of backup paths to protect an active path, where each backup path in the collection protects one or more links on the active path such that every link on the active path is protected by one of the backup paths. While there is no known polynomial time algorithm for computing an active path and a corresponding backup path using the PP scheme for a given source-destination node pair, we show that an active path and a corresponding collection of backup paths using the PPP scheme can be computed in polynomial time, whenever they exist, under each of the following four network models: (a) dedicated protection in WDM networks without wavelength converters; (b) shared protection in WDM networks without wavelength converters; (c) dedicated protection in WDM networks with wavelength converters; and (d) shared protection in WDM networks with wavelength converters. This is achieved by proving that, for any given source $s$ and destination $d$ in the network, if one candidate active path connecting $s$ and $d$ is protectable using PPP, then any candidate active path connecting $s$ and $d$ is also protectable using PPP. It is known that the existence of PP implies the existence of PPP while the reverse is not true. We demonstrate a similar result in the case of segmented path protection. This fundamental property of the PPP scheme is of great importance in the context of achieving further research advances in the area of protection and restoration of WDM networks. Abstract: This paper presents a relaxed extragradient-like method for solving convexly constrained minimization problems with optimal value zero. The method is a combination of the extragradient-like algorithm and a halfspace-relaxation technique applied to the constraint set of the problem. Each iteration of the proposed method consists of the projection onto a halfspace containing the given closed convex set. The method is easily implemented and is proven to be fully convergent to the solution. Preliminary computational experience is also reported.
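The halfspace-relaxation idea in the last abstract rests on the fact that projecting onto a halfspace is available in closed form. A minimal sketch of just that building block (not the authors' algorithm), with a toy halfspace chosen for illustration:

import numpy as np

def project_onto_halfspace(x, a, b):
    # Euclidean projection of x onto the halfspace {y : <a, y> <= b}.
    a = np.asarray(a, float)
    x = np.asarray(x, float)
    violation = np.dot(a, x) - b
    if violation <= 0:
        return x                                    # already feasible
    return x - (violation / np.dot(a, a)) * a       # move along a to the boundary

# toy usage with made-up data
print(project_onto_halfspace([2.0, 2.0], [1.0, 0.0], 1.0))   # -> [1. 2.]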
Abstract: In this paper we present a globally and quadratically convergent method for the problem of minimizing a sum of Euclidean norms with linear constraints. The quadratic convergence result of this method is obtained without requiring strict complementarity. Abstract: In this paper, we study Levitin-Polyak type well-posedness for generalized variational inequality problems with functional constraints, as well as an abstract set constraint. We will introduce several types of generalized Levitin-Polyak well-posedness, and give various criteria and characterizations for these types of well-posedness. Abstract: In this paper, we firstly propose a technique named Duplicating, which duplicates machines in a batch scheduling environment. Then we discuss several applications of Duplicating in solving batch scheduling problems. For the batch scheduling problem on unrelated parallel machines to minimize makespan, we give a $(4 - \frac{2}{B})$-approximation algorithm and a $(2 - \frac{1}{B} + \epsilon)$-approximation algorithm when $m$ is fixed. We also present a $4(2 - \frac{1}{B} + \epsilon)$-competitive algorithm for the on-line scheduling problem on identical machines to minimize total weighted completion time, using another technique, $\rho$-dual, which was proposed originally by Hall et al. (1997). Abstract: For constrained nonconvex optimization, we first show that under second-order sufficient conditions, a class of augmented Lagrangian functions possess local saddle points, and then prove that global saddle points of these augmented Lagrangian functions exist under certain mild additional conditions. Abstract: The Web document is organized by a set of textual data according to a predefined logical structure. It has been shown that collecting Web documents with similar structures can improve query efficiency. The XML document has no vectorial representation, which is required in most existing classification algorithms. The kernel method has been applied to represent structural data with pairwise similarity. In this case, a set of Web data can be fed into classification algorithms in the format of a kernel matrix. However, since the distance between a pair of Web documents is usually obtained approximately, the derived distance matrix is not a kernel matrix. In this paper, we propose to use the nearest correlation matrix (of the estimated distance matrix) as the kernel matrix, which can be computed quickly by a Newton-type method. Experimental studies show that the classification accuracy can be significantly improved. Abstract: In this paper, we develop a model to analyze the coordination of a supply chain with demand influenced by the buyer's promotion. The supply chain consists of a supplier and a group of homogeneous buyers. The buyers choose inventories ex ante and promotional levels ex post. The annual demand rate depends on the promotional level, and the operating cost---including the ordering and inventory holding cost---depends on the promotional level and the order quantity. It is shown that a quantity discount alone is not sufficient to guarantee joint profit maximization. Then we propose a contract, the discount quantity with transfer profit contract, and show that this contract can coordinate the supply chain. Moreover, it is shown that this policy is robust because it can allocate the supply chain profit arbitrarily between the supplier and the buyer and lead the buyer to choose the jointly optimal promotional level, and the supplier need not observe and verify the buyer's promotional level.
For the special case that the operating cost is a fixed constant, we show that there is no contract which can coordinate the supply chain if the promotion is unobservable and unverifiable, and that the discount policy can guarantee coordination of the supply chain if the promotion is observable and verifiable. Abstract: We study a nonlinear complementarity formulation of the supply chain network equilibrium problem, which is parallel to the variational inequality model established by Dong et al. (2004). In this setting, we obtain weaker conditions to guarantee the existence and uniqueness of the equilibrium pattern for a supply chain. A smoothing Newton method that exploits the network structure is proposed and convergence results are presented. Numerical examples show that the algorithm is more applicable than the modified projection method presented by Dong et al. (2004). Abstract: Tumor classification is one of the important applications of microarray technology. In gene expression-based tumor classification systems, gene selection is a main and very important component. In this paper, we propose a new approach for gene selection. With the genes selected using our approach in colon cancer data and in acute lymphoblastic leukemia (ALL) / acute myeloid leukemia (AML) data, we apply support vector machines to classify tissues in these two data sets, respectively. The classification results show that our method is very useful and promising. Abstract: In this paper, we treat the split feasibility problem with an uncertain linear operator (USFP). For this problem, we first reformulate it as an uncertain optimization problem (UOP) with zero optimal value, and then we introduce robust counterparts of the UOP and reformulate them as tractable convex optimization problems. These convex optimization problems have a close connection with the robust counterparts of the USFP and the minimum SFPs under appropriate conditions. At the end of this paper, we give some numerical results to illustrate the effectiveness of the robust solutions of the concerned problem. Abstract: Based on the conclusions put forward by Sougata Poddar and Uday Bhanu Sinha (2002) in "The Role of Fixed Fee and Royalty in Patent Licensing", we construct a model to analyze the behavior of the patentee when the licensee has private information about its marginal production costs before licensing. In the paper, the patentee is not considered an independent R&D institute anymore but an insider. Moreover, we extend the model to the homogeneous Stackelberg case and present the patentee's optimal linear licensing schemes in the sense of maximizing its profits under the licensing contract. It is found that the expected profit of the patentee under the separating contract is higher than under any pooling contract in this model. Abstract: For the constrained optimization problem, under the condition that the objective function is strongly convex, we obtain a global error bound for the distance between any feasible solution and the optimal solution by using the merit function in the sequential quadratic programming (SQP) method.
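One of the abstracts above uses the nearest correlation matrix of an estimated distance matrix as a kernel, computed by a Newton-type method. As a simpler (and slower) illustration of the same object — this is Higham-style alternating projections, not the authors' algorithm, and the input matrix below is a toy example:

import numpy as np

def nearest_correlation(A, n_iter=200):
    # Alternating projections with Dykstra correction onto
    # {symmetric PSD matrices} and {symmetric matrices with unit diagonal}.
    Y = np.array(A, float)
    dS = np.zeros_like(Y)
    for _ in range(n_iter):
        R = Y - dS
        w, V = np.linalg.eigh((R + R.T) / 2)
        X = V @ np.diag(np.clip(w, 0, None)) @ V.T   # project onto the PSD cone
        dS = X - R
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)                     # project onto unit diagonal
    return Y

# toy "estimated" matrix that fails to be positive semidefinite
A = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, -0.9],
              [0.7, -0.9, 1.0]])
print(nearest_correlation(A))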
I want to solve the following exercise from Dummit & Foote. My attempt is down below. Is it correct? Thanks! Show that the group of rigid motions of a cube is isomorphic to $S_4$. My attempt: Let us denote the vertices of the cube so that $1,2,3,4,1$ trace a square and $5,6,7,8$ are the vertices opposite to $1,2,3,4$. Let us also denote the pairs of opposite vertices $d_1,d_2,d_3,d_4$, where vertex $i$ is in $d_i$. To each rigid motion of the cube we associate a permutation of the set $A=\{ d_i \}_{i=1}^4$. Denote this association by $\varphi:G \to S_4$, where $G$ is the group of those rigid motions, and we identify $S_A$ with $S_4$. By the definition of function composition we can tell that $\varphi$ is a group homomorphism. We prove that $\varphi$ is injective, using the trivial kernel characterisation: Suppose $\varphi(g)=1$, that is, $g$ fixes all of the pairs of opposite vertices (so we have $g(i) \in \{i,i+4 \}$ for all $i$, where the numbers are reduced mod 8). Suppose $g$ sends vertex $1$ to its opposite $5$. Then the vertices $2,4,7$ adjacent to $1$ must be mapped to their opposite vertices as well. This is because out of the two seemingly possible options for their images, only one (the opposite vertex) is adjacent to $g(1)=5$. This completely determines $g$ to be the antipodal (negation) map, which is not included in our group. The contradiction shows that we must have $g(1)=1$, and from that we can find similarly that $g$ is the identity mapping. Since $\ker \varphi$ is trivial, $\varphi$ is injective. In order to show that it is surjective, observe that $S_4$ is generated by $\{(1 \; 2),(1 \; 2 \; 3 \; 4) \}$ (this is true because products of these two elements allow us to sort the numbers $1,2,3,4$ in any way we like). We now find elements in $G$ whose images under $\varphi$ are these generators. Observe that if $s$ is a $90^\circ$ rotation around the axis through the centres of the squares $1,2,3,4$ and $5,6,7,8$, such that $1$ is mapped to $2$, followed by a rotation by $120^\circ$ around the line through $2,6$ (so that $1$ is mapped to $3$), we have $\varphi(s)=(1 \; 2)$. Observe also that if $t$ is a $90^\circ$ rotation around the axis through the centres of the squares $1,2,3,4$ and $5,6,7,8$, such that $1$ is mapped to $2$, we have $\varphi(t)=(1 \; 2 \; 3 \; 4)$. Now if $\sigma \in S_4$ is any permutation, we express it as a product involving $(1 \; 2),(1 \; 2 \; 3 \;4)$, and the corresponding product involving $s,t$ is mapped to $\sigma$ by $\varphi$. This proves $\varphi$ is surjective. We conclude that $\varphi$ is an isomorphism, so $G \cong S_4$.
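Not part of the proof, but the overall claim (that the action of the 24 rotations on the four main diagonals is a bijection onto $S_4$) can be checked by brute force. The sketch below is an assumption-laden illustration: it picks explicit coordinates consistent with the labelling in the question (top square 1,2,3,4, with vertex $i+4$ antipodal to $i$) and enumerates the rotations as signed permutation matrices of determinant $+1$.

import itertools
import numpy as np

verts = {1: (1, 1, 1), 2: (-1, 1, 1), 3: (-1, -1, 1), 4: (1, -1, 1),
         5: (-1, -1, -1), 6: (1, -1, -1), 7: (1, 1, -1), 8: (-1, 1, -1)}
coord_to_label = {v: k for k, v in verts.items()}
diagonal = {i: ((i - 1) % 4) + 1 for i in range(1, 9)}   # vertex i and i+4 lie on d_i

# All 24 rotations of the cube = signed permutation matrices with det +1.
rotations = []
for perm in itertools.permutations(range(3)):
    for signs in itertools.product([1, -1], repeat=3):
        M = np.zeros((3, 3), int)
        for row, col in enumerate(perm):
            M[row, col] = signs[row]
        if round(np.linalg.det(M)) == 1:
            rotations.append(M)

induced = set()
for M in rotations:
    # image of each diagonal d_1..d_4 under the rotation
    images = []
    for i in (1, 2, 3, 4):
        new_label = coord_to_label[tuple(int(v) for v in M @ np.array(verts[i]))]
        images.append(diagonal[new_label])
    induced.add(tuple(images))

print(len(rotations), len(induced))   # 24 24: distinct rotations give distinct permutations,
                                      # and since |S_4| = 24 the map is a bijection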
Draglines are heavy and costly machines massively utilised in opencast mines to remove the overburden materials. Because of harsh operating conditions, advanced wear, tear, fractures and fatigue failure are generally observed in draglines. The bucket is the main component of the dragline, and it is the major source of external loads on the equipment because of its contact with the ground material. In this study, three-dimensional (3D) solid moving bucket models are developed, and a stress analysis of the dragline bucket structure has been attempted using the finite element method (FEM). This paper investigates the stresses under maximum loading and different operating velocity conditions. dragline bucket, stress analysis, factor of safety, finite element method Draglines are used to remove the overburden to expose the minerals in a surface mine. Draglines may exceed 4000 tons in average weight, with bucket capacities ranging from 24 m³ to 120 m³. The bucket is dragged against the blasted muck to fill with the blasted overburden. The capital cost of a dragline is almost Rs 500 crore for a bucket capacity of 62 m³. A dragline bucket is a large structure that is suspended from a boom (a large truss-like structure) with wire ropes. The bucket is manoeuvred using ropes and chains. The drag rope is used to drag the bucket assembly horizontally [1]. By skilful manoeuvring of the hoist and the drag ropes, the bucket is controlled for different operations. A schematic diagram of a large dragline bucket system is shown in Figure 1. It has been observed that operational and resultant stress variations are critical issues that induce unsteady stresses, damaging the bucket and shortening its operational life. In this study, the dragline bucket evaluation consists of examining and understanding the stress distribution and the safety factor of the bucket frame under dynamic working conditions. Numerical modelling has been used for examining the key parameters. Finite element analysis (FEA) has long been used for simulation and analysis of draglines under different loading conditions. An FEA model was developed to simulate the cutting process of a sub-layer formation with various geometries [2]. A 3D finite element analysis of soil–blade interaction based on predefined horizontal and vertical failure surfaces was developed to analyse the behaviour of the soil–blade interface and to examine the effect of blade-cutting width and lateral boundary width on predicted forces [3]. A numerical technique was developed to obtain the static equilibrium state of a conventional dragline excavation system, including the static pose of the bucket as well as the internal loads acting on elements of the excavation system [4]. Numerical methods have also been used to model and optimise bucket filling as a complex granular flow [5, 6]. The components of the excavator have been studied to identify the problems faced while performing hoisting and digging operations [7]. An excavator bucket has been designed with forces applied at the tips of the bucket teeth to find the stresses [8]. A 3D model of the bucket has been developed to determine the stresses using Abaqus software [10]. Figure 1. A dragline bucket (62 m³) 2.1 Numerical modeling The finite element method (FEM) is a relatively new and effective numerical method [10]. In this paper, the finite element method was used for simulation and analysis purposes.
Finite element analysis is used for design improvement and optimisation of mechanical parts. In this paper, ANSYS software has been used for the analysis [1]. The simulation includes equivalent stress, deformation, and safety factor under the different operating velocity conditions prevalent in field-scale dragline operation. 2.2 The geometry of a dragline bucket A 3D solid model of a dragline bucket was created in AutoCAD. The actual dimensions of a 62 m³ dragline bucket were taken for generating the solid model depicted in Figure 2. Figure 2. Dragline bucket with different views [9] Figure 3. 3D solid bucket model The AutoCAD-generated solid model file was then imported into SolidWorks and converted to an IGES file, which was then imported into ANSYS 18 to apply the loading and boundary conditions for simulating the dragline bucket and the stress environment acting on it [1]. 2.3 Material property Table 1. Material properties of the dragline bucket (steel) [1]: density 7.85e-006 kg/mm³; tensile ultimate strength 460 MPa; tensile yield strength 250 MPa; Poisson's ratio 0.3; Young's modulus 2.e+005 MPa. 2.4 Determination of resistive forces for different formation properties When the dragline bucket is digging and moving in the forward direction, a series of repeated formation failures occurs due to the contact between the rock formation and the dragline bucket. Many analytical and empirical strategies are available for analysing the forces created during formation cutting. In this paper, McKyes's 2D model [11], Equation (1), is used to evaluate the forces due to adhesion, weight, overloading, cohesion, and inertia, which express the resistance offered by the rock formation to earthmoving [9]. $T=\left(\gamma g d^{2} N_{\gamma}+c d N_{c}+C_{a} d N_{ca}+q d N_{q}+\gamma v^{2} d N_{a}\right)$ (1) where T = resultant cutting force, γ = formation density, d = tool depth, c = cohesion, g = gravitational acceleration, q = overload, w = cutting width, Ca = adhesion, v = cutting velocity of formation, Nc = cohesion coefficient, Nγ = weight coefficient, Na = inertia coefficient, Nq = overload coefficient, Nca = adhesion coefficient. Neglecting the adhesion, overload and inertia terms reduces Equation (1) to Equation (2): $T=\left(\gamma g d^{2} N_{\gamma}+c d N_{c}\right)w$ (2) The N coefficients for the weight (γ) and cohesion (c) terms were found using the charts given by Hettiaratchi and Reece (1974) [9]: $N_{x}=N_{0}+\left(N_{\varphi}-N_{0}\right) \frac{\delta}{\Phi}$ (3) Using Equations (3) and (2), the estimated values of the cohesion and weight coefficients and the resulting cutting force are presented in Table 2. The cutting force was estimated to be 222.94 kN. Table 2. Estimated values of input parameters: cohesion strength of overburden, c = 25.0 kPa; density of overburden material, γ = 2.0 t/m³; external friction angle, δ = 30°; internal friction angle, φ = 40°; cohesion coefficient, Nc = 3.4; resultant cutting force, T = 222.94 kN. 2.5 Finite element mesh and boundary conditions The material in the FEA modelling and simulation was steel with a strength of 460 MPa, showing elastic-perfectly plastic behaviour. The meshing was performed using a 4-node linear tetrahedral continuum element with an element size of 100 mm. The meshing pattern is shown in Figure 4.
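As a quick numerical illustration of Equation (2) above (before continuing with the mesh details), here is a hypothetical Python sketch. The tool depth d, the cutting width w and the weight coefficient Nγ are not reported in Table 2, so the values below are placeholders, and the computed force is not expected to reproduce the paper's 222.94 kN.

g = 9.81                 # m/s^2
gamma = 2000.0           # overburden density, kg/m^3   (2.0 t/m^3 from Table 2)
c = 25.0e3               # cohesion, Pa                 (25 kPa from Table 2)
N_c = 3.4                # cohesion coefficient         (Table 2)
N_gamma = 2.0            # weight coefficient           (assumed, not given)
d = 0.5                  # tool depth, m                (assumed)
w = 2.5                  # cutting width, m             (assumed)

T = (gamma * g * d**2 * N_gamma + c * d * N_c) * w   # Equation (2), in newtons
print(T / 1e3, "kN")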
The resulting mesh comprises 55148 solid elements and 14236 nodes. The boundary conditions are illustrated in Figure 5. Figure 4. Meshing body of bucket Figure 5. Boundary condition of moving bucket The simulation in this investigation applies an external pressure of 1.37x10⁵ Pa to the bucket teeth, and the bucket is given a velocity in the range 0.50-1.5 m/s in the direction of motion. 3.1 Stress distribution on the bucket The von Mises theory has been used for the estimation of equivalent stress in this study. The von Mises yield criterion states that yielding of a ductile material starts when the second deviatoric stress invariant reaches a critical value [1]. Using this information, an engineer can say that a design will fail if the maximum value of the von Mises stress induced in the material is greater than the strength of the material. It works well in most cases when the material is ductile. For this analysis, a 3D solid model was developed to simulate the formation-cutting action of the dragline bucket. The modelling and analysis suggest stress accumulation in the teeth and hitch elements of the moving bucket; failure of the bucket predominantly initiates in these areas. Identifying the stress concentration regions that are most prone to failure under these conditions is critical for planning proper maintenance. The safety factor and von Mises stress distribution variations are given below. Case 1. When the velocity was 0.5 m/s, the stress value and safety factor are illustrated in Figures 6 and 7. Figure 6. Von Mises stress variation on the moving bucket Figure 6a. Graph between stress and time Figure 7. Safety factor variation on the moving bucket Figure 7a. Graph between safety factor and time Case 2. When the velocity was 1.0 m/s, the stress value and safety factor are illustrated in Figures 8 and 9. Figure 8. Von Mises stress variation on the moving bucket Figure 8a. Graph between stress and time Figure 9. Safety factor variation on the moving bucket Figure 9a. Graph between safety factor and time Case 3. When the velocity was 1.50 m/s, the stress value and safety factor are illustrated in Figures 10 and 11. Figure 10. Von Mises stress variation on the moving bucket Figure 10a. Graph between stress and time Figure 11. Safety factor variation on the moving bucket Figure 11a. Graph between safety factor and time Table 3. Values of stress (MPa) and safety factor at different operating velocities (m/s): 0.5 m/s, 147.03 MPa, 1.70; 1.0 m/s, 290.44 MPa, 0.84; 1.5 m/s, 433.50 MPa, 0.57. Initially, when the bucket moves forward to dig the muck immediately after its ground placement, the bucket velocity is 0.5 m/s. While moving and filling in the blasted muck, the bucket velocity increases to 1.0 m/s, and finally it attains a maximum filling velocity of 1.5 m/s. From these simulation results, it is quite clear that as the magnitude of the bucket velocity increases from 0.5 to 1.5 m/s, the stress value also increases, and the maximum value of stress obtained was 433.50 MPa. This maximum stress value is close to the maximum tensile strength of the steel. Furthermore, with the increase in bucket velocity, the bucket safety factor reached a minimum value of 0.57 near the teeth and hitch elements of the bucket. The simulation results show the stress value and safety factor near the hitch elements and bucket teeth.
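A rough cross-check of Table 3 is possible if one assumes — the paper does not state this — that the reported safety factor is simply the tensile yield strength divided by the maximum von Mises stress:

yield_strength = 250.0                      # MPa, from Table 1
for velocity, stress in [(0.5, 147.03), (1.0, 290.44), (1.5, 433.50)]:
    print(velocity, "m/s:", round(yield_strength / stress, 2))
# prints roughly 1.70, 0.86, 0.58 -- close to the 1.70, 0.84, 0.57 reported in Table 3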
In moving conditions of the dragline bucket, it has been observed that when the velocity of the bucket changes, the stress concentration near the hitch elements and the teeth of the bucket also changes, which means that the chances of failure are highest at these locations. The figures show that as the velocity of the bucket increases, the stress value increases and the safety factor decreases near the hitch elements and teeth. This stress value is not enough to cause failure of the complete dragline bucket frame. Future work will address simulation of the complete dragline system under different working and loading conditions. [1] Azam, S.F., Rai, P. (2018). Modelling of dragline bucket for determination of stress. ASME Journals, 78: 392-402. [2] Mouazen, A.M., Nemenyi, M. (1999). Finite element analysis of subsoiler cutting in non-homogeneous. Soil and Tillage Research, 51(1-2): 1-15. https://doi.org/10.1016/S0167-1987(99)00015-X [3] Abo-Elnor, M., Hamilton, R., Boyle, J.T. (2004). Simulation of soil–blade interaction for sandy soil using advanced 3D finite element analysis. Soil & Tillage Research, 75(1): 61-73. https://doi.org/10.1016/S0167-1987(03)00156-9 [4] Costello, M., Kyle, J. (2004). A method for calculating static conditions of a dragline excavation system using dynamic simulation. Mathematical and Computer Modelling, 40(3-4): 233-247. https://doi.org/10.1016/j.mcm.2004.03.001 [5] Coetzee, C.J., Els, D.N.J. (2009). The numerical modelling of excavator bucket filling using DEM. Journal of Terramechanics, 46(5): 217-227. https://doi.org/10.1016/j.jterra.2009.05.003 [6] Coetzee, C.J., Els, D.N.J., Dymond, G.F. (2010). Discrete element parameter calibration and the modelling of dragline bucket filling. Journal of Terramechanics, 47(1): 33-44. https://doi.org/10.1016/j.jterra.2009.03.003 [7] Bende, S.B., Awate, N.P. (2013). Design, modelling and analysis of excavator arm. International Journal of Design and Manufacturing Technology, 4: 14-20. [8] Tupkar, M.P., Zaveri, S.R. (2015). Design and analysis of an excavator bucket. International Journal of Science Research Engineering & Technology, 4(3): 227-229. [9] Gölbaşi, O., Demirel, N. (2015). Investigation of stress in an earthmover bucket using finite element analysis: A generic model for draglines. Journal of the Southern African Institute of Mining and Metallurgy, 115(7): 623-628. https://doi.org/10.17159/2411-9717/2015/v115n7a8 [10] Abo-Elnor, M., Hamilton, R., Boyle, J.T. (2003). 3D dynamic analysis of soil-tool interaction using the finite element method. Journal of Terramechanics, 40(1): 51-62. https://doi.org/10.1016/j.jterra.2003.09.002 [11] McKyes, E. (1985). Soil Cutting and Tillage. Developments in Agricultural Engineering, Elsevier, Amsterdam, 7: 1-217.
(Sorry, I was asleep at that time but forgot to log out, hence the apparent lack of response) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE post for details http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
I can't be entirely sure, but I'll make an informed guess: That value doesn't come from a single measurement. Therefore, if the error in the age of a single sample is $\pm125$ kyr, you just need to average 16 samples to get it down to $\pm31$ kyr. The uncertainty in the addition (or subtraction) of two or more quantities is equal to the square root of the sum of the squares of the uncertainties of each quantity (assuming they arise from random errors). For example, if we have quantity A with uncertainty $\sigma_a$ and quantity B with uncertainty $\sigma_b$, the error in the quantity $C=A+B$ would be: $\sigma_c=\sqrt{\sigma_a^2+\sigma_b^2}$ And if we call M the average of A and B, the uncertainty in the average is $\sigma_m=\frac{\sqrt{\sigma_a^2+\sigma_b^2}}{2}$ So if we average 16 samples with $\sigma=125$ kyr, the uncertainty in the average would be $\sigma_m=\frac{\sqrt{16 \sigma^2}}{16}=\frac{\sqrt{16 \times 125^2}}{16}=31$ kyr Uncertainty propagation can be seen in the abstract of the article you refer to: The extinction occurred between 251.941 ± 0.037 and 251.880 ± 0.031 Mya, an interval of 60 ± 48 ka. Where does the ±48 ka come from? $\sqrt{37^2+31^2}=48$ This treatment of uncertainties assumes that the uncertainties are independent of each other. As @Mark pointed out in the comments, this won't be the case if the uncertainty comes from "the length of your measuring stick (the half-life of U-238)". That is: if you measure something with a "yard-stick" that has the wrong size, you can't reduce the resulting error by just averaging many measurements. I don't know enough about geochronology to understand all the different errors they report. But the simple example from the abstract I presented above shows that they are indeed treating those errors as independent. Otherwise the reported interval error (±48 ka) would not make sense. One significant source of error might indeed be the uncertainty in the half-life of $^{238}$U. However, it would be wrong to treat these errors as independent only if there exists an exact value for this half-life. Alternatively, maybe there is no exact value of the half-life, and what they meant is that the half-life value can truly vary by 0.05%. This is something to look into if you want to figure out whether their treatment of errors is correct. However, after a quick google search I found that radioactive half-life can indeed vary by a small fraction due to environmental conditions; this article explains the phenomenon pretty well. Here a short excerpt: ...radioactive half-life of an atom can depend on how it is bonded to other atoms. Simply by changing the neighboring atoms that are bonded to a radioactive isotope, we can change its half-life. However, the change in half-life accomplished in this way is typically small. For instance, a study performed by B. Wang et al and published in the European Physical Journal A was able to measure that the electron capture half-life of beryllium-7 was made 0.9% longer by surrounding the beryllium atoms with palladium atoms. If the 0.05% uncertainty on the half-life of $^{238}$U comes from random environmental factors, it would indeed be acceptable to consider it an independent source of error for each sample.
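For completeness, the two numbers used above can be reproduced in a few lines of Python (this just restates the arithmetic already given):

import math

sigma_single = 125.0                         # kyr, error of a single sample
n = 16
sigma_mean = math.sqrt(n * sigma_single**2) / n
print(sigma_mean)                            # 31.25 -> ~31 kyr

sigma_interval = math.sqrt(37**2 + 31**2)    # interval between the two dated horizons
print(sigma_interval)                        # ~48 ka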
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Okay so let me begin. Mathematica itself does a semi-good job here. It finds 2 solutions, where one is not applicable for your starting condition. So it finds the only solution, which is 1/Sqrt[1 - 2 HeavisideTheta[-2 + t]] But this only seems to work for $t<2$ and this is indeed right (we'll see later). If we apply the Fourier or Laplace transform we get what we deserve (nothing we can work with, because the $\delta$-function forces us to know $x(2)$): $$x(t)=\theta(t-2)\cdot x(2)+C$$ Since 2 is an exact discontinuity we cannot use this. But we can use the good old approach of separation of variables, so we solve it by hand: $$\frac{\text{d}x(t)}{\text{d}t}=x^3(t)\cdot\delta(t-2)$$$$\int\frac{1}{x^3(t)}\text{d}x(t)=\int\delta(t-2)\text{d}t$$$$-\frac{1}{2x^2(t)}=\theta(t-2)+C$$ Now we would like to use $(...)^{-1}$ but this will limit us to the domain of $t>2$, but this is okay, Mathematica solved it already for $t<2$. We get: $$x(t)=\pm\sqrt{C-\frac{\theta(t-2)}{2}}$$ with your condition: $$x(t)=\pm\sqrt{1-\frac{\theta(t-2)}{2}}=const((2<t)\wedge(2>t))$$ So we look into the visualization: Plot[{1/Sqrt[1-2 HeavisideTheta[-2+t]],Sqrt[1-HeavisideTheta[t-2]/2]},{t,-3,3},PlotRange->{{-3,3},{-5,5}},PlotStyle->{{Red},{Blue,Dashed}},PlotLegends->"Expressions"] Oh look, we have won! Our derived expression surprisingly also describes $t<2$. Let's confirm that: equ=x'[t]==x[t]^3*DiracDelta[t-2]; form=Sqrt[1-HeavisideTheta[t-2]/2]; FullSimplify[equ/.{x[t]->form,x'[t]->D[form,t]},{t\[Element]Reals,t<2}] FullSimplify[equ/.{x[t]->form,x'[t]->D[form,t]},{t\[Element]Reals,t>2}] FullSimplify[equ/.{x[t]->form,x'[t]->D[form,t]},{t\[Element]Reals,t==2}] True True -(DiracDelta[0]/(2 Sqrt[4-2 HeavisideTheta[0]]))==DiracDelta[0] (1-HeavisideTheta[0]/2)^(3/2) Yeah. Exactly what we expected. Our formula describes it perfectly EXCEPT at $t=2$.
I) Recall first the $\phi\phi$-Operator Product Expansion (OPE): $$\tag{A} {\cal R}\left\{\phi(z,\bar{z})\phi(w,\bar{w})\right\} ~-~: \phi(z,\bar{z})\phi(w,\bar{w}):~=~C(z,\bar{z};w,\bar{w}) ~{\bf 1}, $$ where the contraction is assumed to be a $c$-number: $$\tag{B} C(z,\bar{z};w,\bar{w})~=~ \langle 0 | {\cal R}\left\{\phi(z,\bar{z})\phi(w,\bar{w})\right\} |0\rangle ~=~ -\frac{\alpha^{\prime}}{2\pi}\ln\frac{|z-w|^2+a^2}{(2R)^2},$$cf. e.g. this Phys.SE post.Besides the IR cut-off $R>0$, we have added an UV regulator $a>0$ in the contraction (B). Hence in a coinciding world-sheet point, the contraction remains finite $$\tag{C} C(z,\bar{z};z,\bar{z})~=~ -\frac{\alpha^{\prime}}{\pi}\ln\frac{a}{2R}.$$ II) The ${\cal R}$ symbol in eqs. (A)-(B) denotes radial ordering. Note that many authors don't write the radial ordering symbol ${\cal R}$ explicitly, cf. e.g. OP's eq. (1). It is often only implicitly implied in the notation. Radial order ${\cal R}$ has an interpretation as time ordering, and is necessary in order to make contact with the pertinent operator formalism and correlation functions. Note in particular that the exponential in OP's eq. (2) can be written with radial order $$ \tag{D} {\cal R}\left\{ e^{ik\phi(z,\bar{z})}\right\}. $$ III) We next use Wick's theorem between the radial and the normal ordering: $$\tag{E} {\cal R}\left\{{\cal F}[\phi]\right\} ~=~ \exp \left(\frac{1}{2} \int\! d^2z~d^2w~ C(z,\bar{z};w,\bar{w})\frac{\delta}{\delta\phi(z,\bar{z})} \frac{\delta}{\delta\phi(w,\bar{w})} \right) :{\cal F}[\phi]:, $$ cf. e.g. Ref. 1 and my Phys.SE answer here. Here ${\cal F}[\phi]$ denotes an arbitrary functional of the field $\phi$. For the exponential (D), the Wick's theorem (E) becomes $$ {\cal R}\left\{ e^{ik\phi(z,\bar{z})}\right\} ~\stackrel{(E)}{=}~\exp \left(\frac{(ik)^2}{2}C(z,\bar{z};z,\bar{z})\right) :e^{ik\phi(z,\bar{z})}: $$ $$\tag{F}~\stackrel{(C)}{=}~ \exp \left(\frac{\alpha^{\prime}k^2}{2\pi}\ln\frac{a}{2R}\right) :e^{ik\phi(z,\bar{z})}:~=~\left(\frac{a}{2R}\right)^{\frac{\alpha^{\prime}k^2}{2\pi}} :e^{ik\phi(z,\bar{z})}, $$ which is OP's sought-for formula (2). IV) Alternatively, assume that the field $$ \tag{G} \phi(z,\bar{z})~=~\varphi(z,\bar{z}) + \varphi(z,\bar{z})^{\dagger}, \qquad \varphi(z,\bar{z})|0\rangle~=~0,$$ can be written as a sum of an annihilation and a creation part. Then the UV-regularized OPE (A) becomes $$ [\varphi(z,\bar{z}),\varphi(z,\bar{z})^{\dagger}]~\stackrel{(G)}{=}~\phi(z,\bar{z})\phi(z,\bar{z}) ~-~: \phi(z,\bar{z})\phi(z,\bar{z}):$$$$\tag{H} ~\stackrel{(A)}{=}~C(z,\bar{z};z,\bar{z}) ~{\bf 1}. $$ Moreover, the vertex-operator becomes $$ :e^{ik\phi(z,\bar{z})}:~\stackrel{(G)}{=}~e^{ik\varphi(z,\bar{z})^{\dagger}}e^{ik\varphi(z,\bar{z})}~\stackrel{\text{BCH}}{=}~e^{ik\varphi(z,\bar{z})^{\dagger}+ik\varphi(z,\bar{z})+\frac{1}{2}[ik\varphi(z,\bar{z})^{\dagger},ik\varphi(z,\bar{z})]}$$ $$\tag{I}~\stackrel{(H)}{=}~ e^{ik\phi(z,\bar{z})}e^{\frac{k^2}{2}C(z,\bar{z};z,\bar{z})}~\stackrel{(C)}{=}~ e^{ik\phi(z,\bar{z})}\exp \left(-\frac{\alpha^{\prime}k^2}{2\pi}\ln\frac{a}{2R}\right) ,$$ which again leads to OP's sought-for formula (2). In eq. (I) we have used the truncated BCH formula, see e.g. this Phys.SE post. References: J. Polchinski, String Theory, Vol. 1; p. 39, eq. (2.2.7).
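As a quick numerical sanity check of the truncated BCH identity used in step (I) above — not of the CFT computation itself — one can use matrices whose commutator is central within the algebra they generate (a Heisenberg-type stand-in for the c-number commutator); the coefficients below are arbitrary:

import numpy as np
from scipy.linalg import expm

# Check e^A e^B = e^{A + B + [A,B]/2}, valid when [A,B] commutes with A and B.
X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], float)
Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], float)
A, B = 0.7 * X, -1.3 * Y          # arbitrary coefficients (assumed for the test)
comm = A @ B - B @ A              # central in the algebra generated by X and Y

lhs = expm(A) @ expm(B)
rhs = expm(A + B + 0.5 * comm)
print(np.allclose(lhs, rhs))      # True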
The Navier Stokes equation: apply Newton's Second Law to a small 'blob' of fluid moving with the flow (imagine we can watch it as it goes). Normal pressure forces in x-direction: $p(x)\,\delta y\,\delta z$ and $-p(x+\delta x)\,\delta y\,\delta z$. Net pressure force in x-direction: $-\frac{\partial p}{\partial x}\,\delta V$. Vector pressure force: $-\nabla p$ per unit volume. Viscous forces: take bulk fluid component u in the x-direction - assume it is independent of x and y. Tangential viscous stress $\tau=\mu\frac{\partial u}{\partial z}$; μ is the dynamic viscosity (also known as η in kinetic theory, just to stop you getting complacent). Net contribution to viscous force on blob: $\mu\frac{\partial^2 u}{\partial z^2}$ per unit volume. Including x and y variation, when $\nabla\cdot\mathbf{u}=0$ (incompressible) the viscous force per unit volume is $\mu\nabla^2\mathbf{u}$. Gravity force: mg downwards. Putting it all together: acceleration $=-\frac{1}{\rho}\nabla p+\nu\nabla^2\mathbf{u}+\mathbf{g}$, where $\nu=\mu/\rho$ is the kinematic viscosity. Now to get the acceleration in terms of u: The Lagrangian view is to follow the blob as it moves with the flow. The blob's position is $\mathbf{r}(t)$, velocity $\mathbf{u}=\frac{d\mathbf{r}}{dt}$, acceleration $\frac{d\mathbf{u}}{dt}$. Of course, it's hardly easy in practice to sit and watch a blob, so it's also useful to have another way of thinking - the Eulerian picture. Consider a time-varying velocity field referred to fixed points in space. Lagrangian velocity: $\frac{d\mathbf{r}}{dt}=\mathbf{u}$. Eulerian velocity: $\mathbf{u}(\mathbf{r},t)$. Differentiate wrt t: $\frac{d\mathbf{u}}{dt}=\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u}=\frac{D\mathbf{u}}{Dt}$, where $\frac{D}{Dt}=\frac{\partial}{\partial t}+\mathbf{u}\cdot\nabla$ is the material or advective derivative (see below). Putting it all together, we finally get the Navier-Stokes equation for viscous fluid flow under gravity: $\frac{\partial\mathbf{u}}{\partial t}+(\mathbf{u}\cdot\nabla)\mathbf{u}=-\frac{1}{\rho}\nabla p+\nu\nabla^2\mathbf{u}+\mathbf{g}$. Material derivative: the time rate of change following the motion of a fluid blob, rather than the rate of change at a fixed point ($\frac{\partial}{\partial t}$). Since $(\mathbf{u}\cdot\nabla)\mathbf{u}$ is a non-linear term, this makes the maths harder, and in three dimensions, the non-linearity allows for chaos (the mathematical kind, rather than the realisation that you can't work out the equations). Mass conservation continuity equation: Net mass inflow per unit time = rate of increase of mass in container. Net mass inflow in x-direction per unit time $\approx-\frac{\partial(\rho u)}{\partial x}\,\Delta V$ (Taylor expansion). Including y and z inflow -> total inflow $=-\nabla\cdot(\rho\mathbf{u})\,\Delta V$. Rate of increase of mass in container $=\frac{\partial\rho}{\partial t}\,\Delta V$. For a fixed container, ΔV = constant, so $\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{u})=0$. Using a vector identity, $\nabla\cdot(\rho\mathbf{u})=\rho\nabla\cdot\mathbf{u}+\mathbf{u}\cdot\nabla\rho$, so $\frac{D\rho}{Dt}+\rho\nabla\cdot\mathbf{u}=0$. Following a blob of incompressible fluid, $\frac{D\rho}{Dt}=0$, so $\nabla\cdot\mathbf{u}=0$. Incompressibility is a good approximation for liquids, and for gases when $U\ll c_s$, the speed of sound. Under incompressible conditions, sound waves can be neglected. Rectilinear flow between parallel plates in x-direction: Now use the Navier-Stokes equation, but neglect gravity. x-component: $\frac{\partial u}{\partial t}=-\frac{1}{\rho}\frac{\partial p}{\partial x}+\nu\frac{\partial^2 u}{\partial z^2}$, as the other terms are zero. If we take the x-derivative of the above, all terms involving u must vanish, because u is independent of x. So $\frac{\partial^2 p}{\partial x^2}=0$ and $\frac{\partial p}{\partial x}$ is independent of x. y and z components: $\frac{\partial p}{\partial y}=\frac{\partial p}{\partial z}=0$. Now make the further assumption that the flow is steady and y-independent: $\frac{\partial u}{\partial t}=0$ (steady) and $\frac{\partial u}{\partial y}=0$. For steady flow, p is also independent of t, so $\frac{\partial p}{\partial x}$ is a constant. Define $G=-\frac{\partial p}{\partial x}$, the pressure gradient. Putting this all into the original x-component equation, we end up with $\mu\frac{d^2u}{dz^2}=-G$. Boundary conditions: for viscous flow, the no slip condition means that the fluid comes to rest at the walls, i.e. u=0 when z=±h. We get a parabolic flow profile, $u=\frac{G}{2\mu}(h^2-z^2)$ - an example of Poiseuille flow. Volume flux per unit y distance for the above flow is $\int_{-h}^{h}u\,dz=\frac{2Gh^3}{3\mu}$. Flow in a circular pipe: Assume flow is independent of θ and t -> u=u(r). Taking the radial component of del-squared, $\frac{\mu}{r}\frac{d}{dr}\left(r\frac{du}{dr}\right)=-G$. Boundary conditions: u=0 at r=a (pipe wall - no slip condition again) and no singularity at r=0. Putting r=0 -> const=0, so $u=\frac{-Gr^2}{4\mu}+const$. As u=0 at r=a, const $=\frac{Ga^2}{4\mu}$, giving $u=\frac{G}{4\mu}(a^2-r^2)$. Volume flux $=\int_0^a u\,2\pi r\,dr=\frac{\pi Ga^4}{8\mu}$. Average speed $=\frac{Ga^2}{8\mu}$. Reynolds number, Re, is a dimensionless number used to quantify the importance of viscosity. We do this by considering the ratio of inertial forces to viscous forces, and then simplifying with scale analysis: $Re=\frac{|(\mathbf{u}\cdot\nabla)\mathbf{u}|}{|\nu\nabla^2\mathbf{u}|}\sim\frac{U^2/L}{\nu U/L^2}=\frac{UL}{\nu}$,
where U = typical velocity scale, L = typical length scale of variation. If Re>>1, viscosity is usually unimportant. If Re<<1, viscosity is usually important. For Re > Re_crit ~ 1000, flow becomes turbulent (rectilinear flow is unstable). Below this value, the flow remains rectilinear. Flow between rotating cylinders ($\mathbf{i}_r$, $\mathbf{i}_\theta$ and $\mathbf{i}_z$ are unit vectors in the respective directions r, θ and z.) Inner cylinder is fixed. Outer cylinder rotates with angular velocity Ω. We want steady, incompressible flow with circular symmetry: $\mathbf{u}=u_\theta(r)\,\mathbf{i}_\theta$. Neglect end effects and say there is no variation in the z-direction. Take the Navier-Stokes equation in cylindrical polar coordinates, neglecting gravity. The two terms on the LHS are the centrifugal and pressure gradient terms respectively, whilst the RHS terms are viscous terms. The $\mathbf{i}_r$ and $\mathbf{i}_\theta$ terms vanish separately. For ν≠0, try $u_\theta=Ar+\frac{B}{r}$. Boundary conditions: $u_\theta=0$ at r=a and $u_\theta=\Omega b$ at r=b. At r=a: $Aa+\frac{B}{a}=0$. At r=b: $Ab+\frac{B}{b}=\Omega b$. We can now work out the torque needed to keep the inner cylinder at rest. Tangential viscous stress on inner cylinder: $\mu\left(\frac{du_\theta}{dr}-\frac{u_\theta}{r}\right)$ per unit area of cylinder. Torque per unit z-distance on inner cylinder: $2\pi a^2\times$ (tangential stress at r=a). This can be used as a way of measuring μ (viscometer). As Ω increases, laminar flow breaks down and we see waves and turbulence. Finally, to find the pressure distribution, use $\frac{\partial p}{\partial r}=\frac{\rho u_\theta^2}{r}$. Vorticity, $\mathbf{w}=\nabla\times\mathbf{u}$, is the curl of the velocity field. Vorticity is the measure of the local 'spin' or 'rotation' of the fluid, but it is not in general a measure of large scale rotation. Note: for 2D flow in the xy plane, w is of course in the z-direction. a) 'Solid body' rotation: $u_\theta=\Omega r$, so $\mathbf{w}=2\Omega\,\mathbf{i}_z$. Imagine dropping a 'vorticity meter' into the flow: the flow gets faster as r increases, and the meter spins. b) 'Line vortex': $u_\theta=\frac{k}{r}$, so $\mathbf{w}=0$ for r≠0 -> the flow is irrotational. The flow gets slower as r increases, in just such a way that the meter doesn't spin. c) Linear shear flow: no large scale rotation, but $\mathbf{w}\neq 0$, so there is local 'spin'. The vorticity equation: start with the Navier-Stokes equation, taking an incompressible fluid with ρ and ν constant. Vector identity: $(\mathbf{u}\cdot\nabla)\mathbf{u}=\nabla(\tfrac{1}{2}|\mathbf{u}|^2)-\mathbf{u}\times\mathbf{w}$. Now take the curl of this equation, remembering that $\nabla\times\nabla(\cdot)=0$ always, and use the vector identity $\nabla\times(\mathbf{u}\times\mathbf{w})=(\mathbf{w}\cdot\nabla)\mathbf{u}-(\mathbf{u}\cdot\nabla)\mathbf{w}+\mathbf{u}(\nabla\cdot\mathbf{w})-\mathbf{w}(\nabla\cdot\mathbf{u})$. The last two terms must vanish because we already established that $\nabla\cdot\mathbf{u}=0$ and of course $\nabla\cdot\mathbf{w}=\nabla\cdot(\nabla\times\mathbf{u})=0$. With all this in mind, we get the vorticity equation: $\frac{D\mathbf{w}}{Dt}=(\mathbf{w}\cdot\nabla)\mathbf{u}+\nu\nabla^2\mathbf{w}$. Conservation of vorticity in 2D inviscid flow: for inviscid flow, ν=0 (Re=∞). Considering 2D flow, u=(u, v, 0), w=(0, 0, w). So $(\mathbf{w}\cdot\nabla)\mathbf{u}=w\frac{\partial\mathbf{u}}{\partial z}=0$, and hence $\frac{Dw}{Dt}=0$. So, following the motion of a fluid blob, the vorticity of each blob is conserved. In particular, if the fluid is irrotational everywhere at t=0, then it remains irrotational for t>0. 2D inviscid irrotational flow: Irrotational, so $\nabla\times\mathbf{u}=0$. Because $\nabla\times\mathbf{u}=0$, we can say that u is the gradient of some velocity potential φ: $\mathbf{u}=\nabla\phi$. Incompressible, so $\nabla\cdot\mathbf{u}=\nabla^2\phi=0$: Laplace's equation. Bernoulli's equation in 3D (take y-axis as 'up'): Go back to the Navier-Stokes equation rewritten with the vector identity above. Put ν=0, w=0 (inviscid, irrotational flow), so that we can say $\frac{\partial}{\partial t}(\nabla \phi)+\nabla(\frac{1}{2}|\mathbf{u}|^2+\frac{p}{\rho}+gy)=0$, i.e. the contents of the bracket on the LHS must be independent of x, y, z, so $\frac{\partial \phi}{\partial t}+\frac{1}{2}|\mathbf{u}|^2+\frac{p}{\rho}+gy=F(t)$. But we can absorb F(t) into φ (remember, you can arbitrarily add constants to a potential without changing the gradient - all we care about is that $\mathbf{u}=\nabla\phi$). So put $\phi\to\phi-\int F(t)\,dt$. So now $\frac{\partial \phi}{\partial t}+\frac{1}{2}|\mathbf{u}|^2+\frac{p}{\rho}+gy=0$: Bernoulli's equation. We can also set the RHS equal to a constant other than 0. 2D inviscid, irrotational flow around a cylinder: Put a cylinder into a uniform flow (U, 0, 0); how does the flow go around the cylinder?
In cylindrical polar coordinates, Laplace's equation becomes $\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial\phi}{\partial r}\right)+\frac{1}{r^2}\frac{\partial^2\phi}{\partial\theta^2}=0$. Boundary conditions: $u_r=0$ on r=a (normal velocity = 0) and $u_\theta\neq 0$ at r=a (because there is no viscosity, the fluid can slip on the surface). As $r\to\infty$, $u_r\to U\cos\theta$ and $u_\theta\to -U\sin\theta$. All solutions must have the same θ dependence, so the solutions of Laplace's equation that we need are $\phi=\left(Ar+\frac{B}{r}\right)\cos\theta$ for r≥a. So $\phi=U\left(r+\frac{a^2}{r}\right)\cos\theta$. Using Bernoulli's equation, with $\frac{\partial\phi}{\partial t}=0$ and g=0: $\frac{1}{2}|\mathbf{u}|^2+\frac{p}{\rho}=\text{const}$. On the cylinder, $u_\theta=-2U\sin\theta$, so $p=\text{const}-2\rho U^2\sin^2\theta$. The constant is usually set so that as $r\to\infty$, $p\to p_\infty$. The pressure on the cylinder has 'left-right' and 'up-down' symmetry (i.e. we can replace θ with π-θ or –θ and nothing changes), so, because the flow is inviscid, there is no pressure induced drag on the cylinder. Aerodynamic extension: now add an asymmetric 'swirl' term to the potential we found earlier: $\phi=U\left(r+\frac{a^2}{r}\right)\cos\theta-A\theta$, A>0. Now $u_\theta=-U\left(1+\frac{a^2}{r^2}\right)\sin\theta-\frac{A}{r}$, where the last term is like a line vortex, leading to additional clockwise swirl but zero vorticity for r≠0. Now, on the cylinder, $u_\theta=-2U\sin\theta-\frac{A}{a}$, so $p=\text{const}-\frac{1}{2}\rho\left(2U\sin\theta+\frac{A}{a}\right)^2$. There is still 'left-right' symmetry, but not 'up-down' symmetry, so we have a lift force (pressure is greater below the x-axis than above). The streamfunction: because of incompressibility, $\nabla\cdot\mathbf{u}=0$, i.e. $\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0$ in 2D. We can write $u=\frac{\partial\psi}{\partial y}$ and $v=-\frac{\partial\psi}{\partial x}$. For an irrotational flow, we get Laplace's equation for ψ as well as φ. For our standard 2D flow example, $\mathbf{u}\cdot\nabla\psi=0$, so the velocity is parallel to the lines of constant ψ (and perpendicular to the lines of constant φ).
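As an illustration of the lift result described above, here is a short Python sketch that integrates the Bernoulli surface pressure for the flow with added clockwise swirl and compares it with the Kutta-Joukowski value $\rho U\Gamma$, writing the swirl strength as a circulation $\Gamma=2\pi A$. The numerical values are made up, and this is only a numerical check, not part of the notes.

import numpy as np

rho, U, a, Gamma = 1.2, 10.0, 0.5, 4.0      # assumed values
theta = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
dtheta = theta[1] - theta[0]

# slip velocity on the cylinder surface: uniform stream plus clockwise swirl
u_theta = -2.0 * U * np.sin(theta) - Gamma / (2.0 * np.pi * a)
# Bernoulli on the surface (the constant reference pressure drops out of the lift)
p = 0.5 * rho * (U**2 - u_theta**2)
# lift per unit length = y-component of the integrated surface pressure force
lift = -np.sum(p * np.sin(theta)) * a * dtheta
print(lift, rho * U * Gamma)                # both ~ 48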
I don't understand exactly what you are looking for, I'll try to explain the Curry-Howard correspondence in a nutshell, you'll let me know if it helps. The Curry-Howard correspondence (or isomorphism, if you wish) definitely links the three objects you mention: it actually tells that two of them, IL and $\lambda$c, are the same thing. The term is used today in a broad sense and often without referring to a specific technical statement, but it can be formulated precisely in at least two frameworks: Hilbert-style deduction for intuitionistic logic and SK combinators; Gentzen's natural deduction and the $\lambda$-calculus. The first is actually the "original" correspondence, dating back to the late 50s when Curry observed that the types you would give to the SK combinators$$\begin{align*}S &:(A\rightarrow B\rightarrow C)\rightarrow(A\rightarrow B)\rightarrow A\rightarrow C \\K &:A\rightarrow B\rightarrow A\end{align*}$$are exactly the axioms of implicational propositional intuitionistic logic in Hilbert-style deduction. Note that this latter has only modus ponens as a rule for building proofs; accordingly, terms in combinatory logic are built using only application. That's it for the combinatory logic side. It took another decade before Howard fully formulated the correspondence in the second guise, which I think looks best: on the logical side, you have $\mathbf{NJ}^{\Rightarrow,\land}$, Gentzen's natural deduction system for the negative fragment of propositional intuitionistic logic. You have propositional formulas defined by $A,B::=\alpha\mathrel{|} A\Rightarrow B\mathrel{|} A\land B$ (atom, implication and conjunction), proofs built from introduction and elimination rules for each connective, and you have proof normalization, which consists in 2 rewriting rules replacing what Prawitz calls " detours" (an introduction of a connective immediately followed by its elimination) with more explicit pieces of proof. This rewriting procedure closely corresponds to cut-elimination in sequent calculus, also introduced by Gentzen. On the computational side, you have the simply-typed $\lambda$-calculus $\Lambda^{\rightarrow,\times}$, with types defined by $T,U::=o\mathrel{|}T\rightarrow U\mathrel{|} T\times U$ (base, function space and pairs), terms built from constructors ($\lambda$ for $\rightarrow$ and pairing for $\times$) and destructors (application for $\rightarrow$ and projections for $\times$), and $\beta$-reduction, which consists of 2 rewriting rules: the usual one and one for handling projections acting on a pair. The Curry-Howard correspondence, which is the subject of the book you are reading, has three levels: formulas = types: just change the notation swapping $\alpha/o$, $\Rightarrow/\rightarrow$ and $\land/\times$; proofs = terms: more precisely, introduction rule=constructor, elimination rule=destructor; normalization = reduction: this is harder to write inline, but it's obvious once you write the rewriting rules of the two systems side by side. The correspondence is one-to-one, i.e., one step in $\mathbf{NJ}^{\Rightarrow,\land}$ corresponds to exactly one step in $\Lambda^{\rightarrow,\times}$ and vice versa. At this level, I hesitate to call this an "isomorphism" because it is not entirely clear what structures are being preserved. 
Here's where category theory may be of help: if you formulate $\mathbf{NJ}^{\Rightarrow,\land}$ and $\Lambda^{\rightarrow,\times}$ as categories (without being too precise: formulas/types as objects and normal proofs/normal terms as morphisms), then they are isomorphic as Cartesian-closed categories (CCC). Indeed, as I defined them, they both are the free CCC on one object ($\alpha$ if you are a logician or $o$ if you are a computer scientist). So, there you go, you now have a third object in the picture and you get what Robert Harper calls the Holy Trinity (logic, programming languages and categories). Actually, the above categorical view hides a bit what I think is the most important aspect of the Curry-Howard correspondence, which is normalization = reduction. Somewhat annoyingly, this is left out in current alternative terminologies for the correspondence: people say "proofs as programs" or "formulae as types", nobody says "cut-elimination as computation". To properly accommodate the third level, you'd have to climb a bit up the higher-dimensional ladder and make $\mathbf{NJ}^{\Rightarrow,\land}$ and $\Lambda^{\rightarrow,\times}$ into 2-categories, or even more (things make sense up to dimension 3 at least). If you're looking for concrete examples, the above framework isn't very rich but already gives you an idea. Take the second-order definition of the natural numbers: $$\mathsf{Nat}(x):=\forall\alpha.(\forall y.\alpha(y)\Rightarrow\alpha(\mathsf{s}y))\Rightarrow\alpha(0)\Rightarrow\alpha(x)$$($x$ is a natural number if it satisfies every property $\alpha$ which is true for $0$ and is true for $y+1$ as soon as it is true for $y$). Now, erase all first-order information and second-order quantification. You get$$(\alpha\Rightarrow\alpha)\Rightarrow\alpha\Rightarrow\alpha$$Translated into types, this is$$\mathsf{N}:=(o\rightarrow o)\rightarrow o\rightarrow o.$$Guess what the normal forms of type $\mathsf N$ are? Exactly the (typed version of the) Church numerals. So every term of type $\mathsf{N}\rightarrow\mathsf{N}$ is a function on natural numbers. If you keep the first- and second-order information, you get much more. For instance, the proof in second-order Peano arithmetic that addition is total, i.e., that$$\vdash\forall x.\forall y.\mathsf{Nat}(x)\land\mathsf{Nat}(y)\Rightarrow\mathsf{Nat}(x+y)$$gives you a $\lambda$-term of type $\mathsf{N}\times\mathsf{N}\rightarrow\mathsf{N}$. Guess what function it computes? Well, addition, of course (see Krivine's book "Lambda-calculus: Types and Models"). Cool, isn't it? Another classical source to learn about Curry-Howard is Girard, Lafont and Taylor's book "Proofs and Types" (but you may already know that, it's in the bibliography of the book you are reading).
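As a concrete (if untyped) illustration of the Church-numeral remark, here is a small Python sketch; Python's lambdas only mimic the simply typed terms of type $(o\to o)\to o\to o$, so treat this as an informal analogue rather than the typed calculus itself.

```python
# Church numerals as Python lambdas: a numeral n takes a "successor" f and a
# "zero" x and returns f applied n times to x -- the shape (o -> o) -> o -> o.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition: m + n applies f m more times on top of n's result.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Read a Church numeral back as a Python int."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```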
I have a package for numerically calculating solutions of eigenvalue problems using the Evans function via the method of compound matrices, which is hosted on GitHub. See my answers to other questions or the GitHub page for some more details. First we install the package (only need to do this the first time): Needs["PacletManager`"] PacletInstall["CompoundMatrixMethod", "Site" -> "http://raw.githubusercontent.com/paclets/Repository/master"] We then need to turn the ODE into a matrix form $\mathbf{y}'=\mathbf{A} \cdot \mathbf{y}$, using my function ToMatrixSystem: Needs["CompoundMatrixMethod`"] sys = ToMatrixSystem[{λ F'''[x] - 2 λ β F''[x] + ((λ β - 1) β - μ) F'[x] + β^2 F[x] == 0}, {F[0] == 0, F''[0] == β F'[0], F''[1] == β F'[1]}, F, {x, 0, 1}, μ] The object sys contains the matrix $\mathbf{A}$, as well as similar matrices for the boundary conditions and the range of integration. Now the function Evans will calculate the Evans function (also known as the Miss-Distance function) for any given value of $\mu$ provided that $\lambda$ and $\beta$ are specified; this is an analytic function whose roots coincide with eigenvalues of the original equation. Evans[1, sys /. {λ -> 1, β -> 2}] (* -0.023299 *) We can also plot this: Plot[Evans[μ, sys /. {λ -> 1, β -> 1}], {μ, -300, 300}, AxesLabel -> {"μ", "E(μ)"}] For these parameter values you can see there are six eigenvalues within the plot region; the function findAllRoots (by user Jens, available from this post) will find them all: findAllRoots[Evans[μ, sys /. {λ -> 1, β -> 1}], {μ, -300, 300}] (* {-247.736, -158.907, -89.8156, -40.4541, -10.7982, -0.635232} *) For the values $\lambda=0.02, \beta=10$, it helps to remove the normalisation that is applied by default in Evans, and I've also plotted the values given by bbgodfrey's solution: Plot[Evans[μ, sys /. {λ -> 1/50, β -> 10}, NormalizationConstants -> 0], {μ, -21, -2}, Epilog -> Point[{#, 0} & /@ {-20.88, -17.48, -14.36, -11.53, -9.03, -6.92, -5.26, -4.08, -3.16}]] It looks like there is an infinite set of solutions for negative $\mu$ for this set of parameters. For $\beta=0$ the eigenvalues are $-\lambda (n \pi)^2$, and the general behaviour as $\mu \to -\infty$ looks the same. As you vary $\beta$ and $\lambda$ you can see how the function changes and eigenvalues can move and coalesce. Note there is a discrepancy with bbgodfrey's results, which missed the root at $-3.392$ and gave a spurious one at $-3.165$. That isn't to say that you can't manage to get spurious solutions by applying FindRoot to this technique if you aren't careful. Additionally, as the Evans function is (complex) analytic, you can use the Winding Number to find how many roots are within a contour. I have some functions for this, so you can see that there are only these 9 roots within the circular contour of radius 21 centred at the origin: PlotEvansCircle[sys /. {λ -> 1/50, β -> 10}, ContourRadius -> 21, nPoints -> 500, Joined -> True] Here the left-hand side is a circle in the complex plane, and the right-hand side is the Evans function applied to the points on that circle. The winding number is how many times the right-hand contour goes round the origin. There is also a PlotEvansSemiCircle for seeing if there is a root in the positive half-plane (as that is a common question for instability). To get the eigenfunctions, replace the right-hand BC with a condition fixing $F'(0)$ to some arbitrary value and use NDSolve. This is at high precision to check the boundary conditions are being met: Block[{β = 10, λ = 1/50}, μc = μ /.
FindRoot[Evans[μ, sys, NormalizationConstants -> 0, WorkingPrecision -> 50], {μ, -3}, WorkingPrecision -> 50, AccuracyGoal -> 15, PrecisionGoal -> 15] // Quiet; sol = First[NDSolve[{λ F'''[x] - 2 λ β F''[x] + ((λ β - 1) β - μ) F'[x] + β^2 F[x] == 0, F[0] == 0, F''[0] == β F'[0], F'[0] == 1/10} /. μ -> μc, F, {x, 0, 1}, WorkingPrecision -> 50, AccuracyGoal -> 30, PrecisionGoal -> 30]]; Plot[(F[x]/F[1]) /. sol, {x, 0, 1}, PlotLabel -> "Eigenfunction for μ=" <> ToString@Round[μc, 0.0001] <> ", with BC error: " <> ToString@Round[((F''[1] - β F'[1]) /. sol), 0.0001], AxesLabel -> {"x", "F(x) (scaled)"}, PlotRange -> All] ] The eigenfunctions get increasingly oscillatory as the eigenvalue gets more negative: roots = Reverse@findAllRoots[Evans[μ, sys /. {λ -> 1/50, β -> 10}, NormalizationConstants -> 0], {μ, -200, -2}] Plot[Evaluate[Table[(F[x]/F[1]) /. NDSolve[{λ F'''[x] - 2 λ β F''[x] + ((λ β - 1) β - μ) F'[x] + β^2 F[x] == 0, F[0] == 0, F''[0] == β F'[0], F'[0] == 1/1000} /. {λ -> 1/50, β -> 10}, F, {x, 0, 1}], {μ, roots}]], {x, 0, 1}, PlotRange -> All, AxesLabel -> {"x", "F(x), scaled"}] The method will work for higher order systems (up to 10th order generally), and does not require DSolve to be able to solve the underlying ODE.
How is this trigonometric relation derived in simple terms? $$\cos\left(\frac{2\pi}{N}\right) = 1 - 2\sin^2\left(\frac{\pi}{N}\right)$$ You know that $\cos(2\theta) = 1 - 2\sin^2(\theta)$, and.... actually, that's it! (These are the double-angle identities.) In isosceles triangle $ABC$ with $AB=AC=1,$ let $D$ be the midpoint of $BC,$ and let $\angle BAD=\angle CAD=X.$ We have $\angle BDA=\pi/2$, so $$CB= 2DB=2AB \sin X =2\sin X. $$ $$\text {Hence } CB^2=4\sin^2 X.$$ The Theorem of Pythagoras implies the Cosine Formula: $$CB^2=AB^2+AC^2-2\cdot AC\cdot AB\cdot\cos \angle BAC.$$ With $AB=AC=1$ and $\angle BAC=2X$ we have $$CB^2=2-2\cos 2X.$$ Therefore $4\sin^2 X=2-2\cos 2X,$ i.e. $\cos 2X = 1 - 2\sin^2 X.$ Here $X$ can be any angle between $0$ and $\pi/2$; it is easy now to show this equation holds for all $X$.
I'm having a little trouble with correlation functions, Wick's theorem and ordering in the context of OPE and CFT, for string theory. (1) My first question: the propagator is: $$<X(z) X(w)> = \frac{\alpha}{2} \ln(z-w).$$ In the context of primary operators it's easy to see that $X$ is not a good conformal field. But $\partial X$ is, so I need to get: $$<\partial X(z) \partial X(w) >$$ which I can get from the propagator of $X$ by taking two derivatives. If I take the first one: $$\partial < X(z) X(w) > = <\partial X(z) X(w) > + <X(z) \partial X(w)>$$ But this seems to give the wrong result. So I guess that the derivative is: $$\partial <X(z) X(w) > = <\partial X(z) X(w) >$$ If I want to take the second derivative, the result seems to be: $$\partial <\partial X(z) X(w)> = <\partial X(z) \partial X(w)>. $$ But I don't understand why I should want that derivative and not: $$\partial <\partial X(z) X(w)> = <\partial^2 X(z) X(w)>.$$ (2) Regarding normal ordering and Wick's theorem, I have the following definition of normal ordering: $$T = \frac{-1}{\alpha} :\partial X \partial X: = \frac{-1}{\alpha} \lim_{z \to w} (\partial X(z) \partial X(w) - <\partial X(z)\partial X(w)>)$$ And the condition: $$<T> = 0$$ But what happens if I want to compute this: $$T(z) T(w) = \frac{1}{\alpha^2} : \partial X(z) \partial X(z) : :\partial X(w) \partial X(w): $$ What's the meaning of a product of normal ordered operators?
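For the purely mechanical part of (1), one can check how successive derivatives in $z$ and then in $w$ act on the propagator with a computer algebra system. This is only a sketch (assuming SymPy), and it uses the sign/normalisation convention written in the question, not any particular textbook's:

```python
import sympy as sp

z, w, alpha = sp.symbols('z w alpha')

# Propagator as written in the question (the sign convention is the asker's).
G = alpha / 2 * sp.log(z - w)

# Differentiate once in z, then once in w, to get a candidate for <dX(z) dX(w)>.
dG = sp.diff(G, z)           # alpha / (2*(z - w))
d2G = sp.diff(dG, w)         # alpha / (2*(z - w)**2)
print(sp.simplify(dG), sp.simplify(d2G))
```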
We already learnt about vectors and their notation in the previous physics article. Now, we will discuss vector addition and subtraction. Vector Addition: Vector addition is done based on the triangle law. Let us see what the triangle law of vector addition is. Suppose there are two vectors: \(\overrightarrow{a}\) and \(\overrightarrow{b}\). Now, draw a line \(AB\) representing \(\overrightarrow{a}\) with \(A\) as the tail and \(B\) as the head. Draw another line \(BC\) representing \(\overrightarrow{b}\) with \(B\) as the tail and \(C\) as the head. Now join the line \(AC\) with \(A\) as the tail and \(C\) as the head. The line \(AC\) represents the resultant sum of the vectors \(\overrightarrow{a}\) and \(\overrightarrow{b}\), i.e. \(AC\) represents \(\overrightarrow{a}\) + \(\overrightarrow{b}\). The magnitude of \(\overrightarrow{a}\) + \(\overrightarrow{b}\) is: \(\sqrt{a^2~+~b^2~+~2ab~\cos~\theta}\), where \(a\) = magnitude of vector \(\overrightarrow{a}\), \(b\) = magnitude of vector \(\overrightarrow{b}\), and \(\theta\) = angle between \(\overrightarrow{a}\) and \(\overrightarrow{b}\). Let the resultant make an angle of \(\phi\) with \(\overrightarrow{a}\); then: \(\tan\phi\) = \(\frac{b~\sin~\theta}{a~+~b~\cos~\theta}\). Let us understand this by means of an example. Suppose there are two vectors having equal magnitude \(A\), and they make an angle \(\theta\) with each other. Now, to find the magnitude and direction of the resultant, we will use the formulas mentioned above. Let the magnitude of the resultant vector be \(B\): \(B\) = \(\sqrt{A^2~+~A^2~+~2AA~\cos~\theta}\) = \(2~A~\cos~\frac{\theta}{2}\). Let's say that the resultant vector makes an angle \(\phi\) with the first vector. Then \(\tan~\phi\) = \(\frac{A~\sin~\theta}{A~+~A~\cos~\theta}\) = \(\tan~\frac{\theta}{2}\), or \(\phi\) = \(\frac{\theta}{2}\). Vector Subtraction: Subtraction of two vectors is similar to addition. Suppose \(\overrightarrow{b}\) is to be subtracted from \(\overrightarrow{a}\). \(\overrightarrow{a}\) – \(\overrightarrow{b}\) can be seen as the addition of the vectors \(\overrightarrow{a}\) and (-\(\overrightarrow{b}\)). Thus, the formula for addition can be applied to give the magnitude: \(|\overrightarrow{a}~–~\overrightarrow{b}|\) = \(\sqrt{a^2~+~b^2~-~2ab~\cos~\theta}\), since (-\(\overrightarrow{b}\)) is nothing but \(\overrightarrow{b}\) reversed in direction.
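A quick numerical check of these formulas (a minimal Python sketch; the magnitudes and angle are arbitrary illustrative values, not taken from the article):

```python
import numpy as np

# Illustrative values: |a| = 3, |b| = 2, angle between them = 60 degrees.
a_mag, b_mag, theta = 3.0, 2.0, np.radians(60)

# Triangle-law formulas from the text.
r_mag = np.sqrt(a_mag**2 + b_mag**2 + 2 * a_mag * b_mag * np.cos(theta))
phi = np.arctan2(b_mag * np.sin(theta), a_mag + b_mag * np.cos(theta))

# Component check: put a along the x-axis and b at angle theta to it.
a_vec = np.array([a_mag, 0.0])
b_vec = np.array([b_mag * np.cos(theta), b_mag * np.sin(theta)])
r_vec = a_vec + b_vec

print(r_mag, np.linalg.norm(r_vec))                                  # both ~4.3589
print(np.degrees(phi), np.degrees(np.arctan2(r_vec[1], r_vec[0])))   # both ~23.41 degrees
```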
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty). And Chrome has a Personal Blocklist extension which does what you want. : ) Of course you already have a Google account but Chrome is cool : ) Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies? do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created. @QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value. I have problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equal to $1$, with $n \to \infty$. @QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0. @KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what/s y? "dy/dx is by definition not continuous" it's not a function how can you ask whether or not it's continous, ... etc. In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results @QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O @NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that. @NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment. @QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h). 
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon$ by picking some correct $L$ (somehow). Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with $\int_0^{2 \pi} \frac{d}{dn} e^{inx}\, dx$ when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates it's not 0. Does anyone know?
Nodes are the points in space around a nucleus where the probability of finding an electron is zero. However, I heard that there are two kinds of nodes, radial nodes and angular nodes. What are they and what information do they provide about an atom? 1. How to get the number and type of nodes for an orbital As you said, nodes are points of zero electron density. From the principal quantum number $n$ and the azimuthal quantum number $\ell$, you can derive the number of nodes, and how many of them are radial and angular. $$\text{number of nodes}=n-1$$ $$\text{angular nodes}=\ell$$ $$\text{radial nodes}=(\text{number of nodes})-(\text{angular nodes})$$ So each type of orbital ($s, p, d$ etc) has its own unique, fixed number of angular nodes, and then as $n$ increases, you add radial nodes. Examples: First shell For the first shell, $n=1$, which means the number of nodes will be 0. Examples: Second shell For the second shell, $n=2$, which yields 1 node. For the $2s$ orbital, $\ell = 0$, which means the node will be radial For the $2p$ orbital, $\ell = 1$, which means the node will be angular Examples: Third shell The third shell, $n=3$, yielding $3-1=2$ nodes. The $3s$ orbital still has $\ell = 0$ meaning no angular nodes, and thus the two nodes must be radial The $3p$ orbital still has one angular node, meaning there will be one radial node as well The $3d$ orbital has two angular nodes, and therefore no radial nodes! 2. The difference between radial and angular nodes Radial nodes are nodes inside the orbital lobes, as far as I can understand. It's easiest to understand by looking at the $s$-orbitals, which can only have radial nodes. To see what an angular node is, then, let's examine the $2p$-orbital - an orbital that has one node, and that node is angular. We see that angular nodes are not internal contours of 0 electron probability, but rather a plane that goes through the orbital. For the $2p_\text{z}$-orbital, the angular node is the plane spanned by the x- and y-axis. For the $2p_\text{y}$-orbital, the angular node is the plane spanned by the z- and x-axis. The accepted answer has nice pictures, but is perhaps somewhat lacking in rigour. Here's a bit more maths. Atomic orbitals, which are one-electron wavefunctions, are split into two components: the radial and angular wavefunctions $$\psi_{nlm}(r,\theta,\phi) = R_{nl}(r)Y_{lm}(\theta,\phi)$$ so-called because they only have radial ($r$) and angular ($\theta$,$\phi$) dependencies respectively. If either of these two components is zero, the total wavefunction is zero and the probability density there (given by $\psi^*\psi$) is also zero. A radial node occurs when the radial wavefunction is equal to zero. Since the radial wavefunction only depends on $r$, this obviously means that each radial node corresponds to one particular value of $r$. (The radial wavefunction may be zero when $r = 0$ or $r \to \infty$, but these are not counted as radial nodes.) An angular node is analogously simply a region where the angular wavefunction is zero. 1 In the case of the p-orbitals, this is a plane, although angular nodes are not necessarily planes. The number of radial and angular nodes is dictated by the forms of the wavefunctions, which are derived by solving the Schrodinger equation.
For a given orbital with quantum numbers $(n,l)$, there are $n-l-1$ radial nodes and $l$ angular nodes, as previously described. An example Let's take a look at one of the hydrogen 3d wavefunctions 2 with $(n,l) = (3,2)$. $a_0$ is the Bohr radius of the hydrogen atom. We would expect $3-2-1 = 0$ radial nodes and $2$ angular nodes. $$\mathrm{3d}_{z^2} = \underbrace{\left[\frac{4}{81\sqrt{30}}a_0^{-3/2}\left(\frac{r}{a_0}\right)^2 \exp{\left(-\frac{r}{3a_0}\right)}\right]}_{\text{radial: }R_{32}} \underbrace{\left[\sqrt{\frac{5}{16\pi}}(3\cos^2\theta - 1)\right]}_{\text{angular: }Y_{20}}$$ All the individual terms in the radial wavefunction can never be zero (excluding the cases of $r = 0$ or $r \to \infty$ as I described earlier). Therefore, this orbital has no radial nodes. Surprise, surprise. The angular nodes are more interesting. The angular wavefunction vanishes when $3 \cos^2\theta - 1 = 0$. Since $\theta$ takes values between $0^\circ$ and $180^\circ$, 3 this corresponds to the two solutions $\theta = 54.7^\circ, 125.3^\circ$. Both of these solutions are angular nodes. This is what they look like. 4 The dotted lines are the angular nodes. They are not planes, but rather cones. They correspond to one particular value of $\theta$, which in spherical coordinates is the angle made with the positive $z$-axis; I have marked these angles on the diagram. If you can obtain the forms of the wavefunctions, then it is easy to find the radial nodes. Mark Winter at Sheffield has a great website for this; just click on the orbital you want on the left, then "Equations" near the top-right corner. Notes and references 1 If we stick to complex atomic orbitals, which are simultaneous eigenfunctions of $H$, $L^2$ and $L_z$, then the dependency on $\phi$ is always of the form $e^{\pm im\phi}$, which can never be zero, so angular nodes never arise due to $\phi$-dependency. However, radial and angular nodes are most commonly discussed in the context of real atomic orbitals, obtained by linear combination of the spherical harmonics. These do have angular nodes that depend on both $\theta$ and $\phi$, but they are often much simpler to express in terms of Cartesian coordinates $(x,y,z)$ (obvious examples being the $\mathrm{2p}_x$ and $\mathrm{2p}_y$ orbitals). The $\mathrm{3d}_{z^2}$ orbital is an exception that was deliberately chosen as an example because its angular nodes are not planes. 2 Griffiths, Introduction to Quantum Mechanics, 2nd ed., pp 139, 154. 3 Yes, I am using degrees, not radians. 4 Image source: own work. It was surprisingly difficult to find an appropriate picture online. If you vibrate a piece of rope then it is quite easy to show that nodes will appear, and when standing waves are produced the nodes are fixed in space and time. Between the nodes the rope oscillates up and down. The n$^{th}$ harmonic has $n-1$ nodes. The more nodes there are, the greater the energy required to produce them. In a vibrating rectangular membrane nodal lines are produced, and in a vibrating disc nodal rings. The nodal lines, where the amplitude is zero, separate normal vibrational modes. Similarly with atoms and molecules. Nodes are points where the wavefunction crosses zero, and its amplitude is zero. Points where the wavefunction gradually approaches zero (at the origin or at infinity) are not considered nodes. Nodes naturally appear in all solutions of the Schrodinger equation, even for simple systems such as a particle in a box. Not all wavefunctions have nodes; the lowest energy one does not (e.g.
the s orbital in atoms, zero-point vibration and zero rotation in molecules, the lowest MO in a molecule). The larger the number of nodes a wavefunction has, the greater the energy eigenvalue. Some pictures of wavefunctions and nodes have been given in the answer by @Brian. There appears to be no special significance to a node; it arises purely out of the solution of the Schrodinger equation and is a consequence of the solutions (hence boundary conditions) we choose to obtain quantisation. Without nodes it is difficult to see how wavefunctions describing different energy eigenvalues could be constructed. Briefly extending the question to include bond formation: in forming bonds from atomic orbitals, the parity of the wavefunction is important. Upon interchanging coordinates, i.e. applying the inversion operator, odd parity (ungerade, u) multiplies the orbital by $-1$ and even parity (gerade, g) leaves the orbital unchanged. The node in a p orbital causes it to have odd parity, an s orbital even parity. The symmetry (parity) determines whether two orbitals can combine to form a bonding orbital or an anti-bonding one, e.g. $\sigma$ or $\sigma ^*$, $\pi$ or $\pi ^*$. Anti-bonding orbitals always have nodes. Nodal planes are also important in Diels-Alder addition and similar reactions, as symmetry is important here. In spectroscopic transitions in molecules, the nodal structure of the molecular orbitals determines, via symmetry, which transitions are allowed or forbidden.
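To tie the counting rules and the $\mathrm{3d}_{z^2}$ example above together, here is a minimal Python sketch (the orbital labels are standard, but the script itself is just an illustration):

```python
import numpy as np

# Node counting rules quoted above: total = n - 1, angular = l, radial = n - l - 1.
def node_counts(n, l):
    return {"total": n - 1, "angular": l, "radial": n - l - 1}

for label, (n, l) in {"1s": (1, 0), "2s": (2, 0), "2p": (2, 1),
                      "3s": (3, 0), "3p": (3, 1), "3d": (3, 2)}.items():
    print(label, node_counts(n, l))

# Angular nodes of the 3d_z^2 orbital: solve 3*cos(theta)**2 - 1 = 0 for theta.
theta_nodes = np.degrees(np.arccos([1 / np.sqrt(3), -1 / np.sqrt(3)]))
print(theta_nodes)  # approximately [54.7356, 125.2644] degrees
```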
A peek through the imaginary looking glass. Plymouth, December 2018. Before starting: thank you very much for the invitation; please ask questions. Recalling projective geometry: the set of lines through a point is parametrized by the points in a line, except for one. Solution: add one point to the line. The projective line is the line completed with one point at infinity. From Euclid: a line cuts a circle in two points... or not. Algebraically: \[(x-2)^2+y^2-2 = 0,\qquad y=\lambda\cdot x,\] so \[(x-2)^2 + (\lambda \cdot x)^2 -2 = 0,\] i.e. \[(\lambda^2+1)x^2 - 4x + 2 = 0,\] giving \[x = \frac{2\pm\sqrt{2-2\lambda^2}}{\lambda^2+1}.\] For \(|\lambda| \leq 1\): real solutions; for \(|\lambda| \gt 1\): complex solutions. Through the looking glass... to other dimensions. Lines \(y = \lambda x\): the set of complex solutions is homeomorphic to the \(x\) axis, i.e. to the complex plane. Conics. What happens around the discriminant? For \(y = \sqrt{x}\), write \(x = e^{2\pi i\theta}\), so \(y = \pm e^{\pi i\theta}\): as \(x\) makes the full turn, \(y\) makes half a turn. Stairs between the discriminant. Projective conics. Higher degrees: elliptic curves \(y^2 = x^3-x\). And so on... Theorem: a smooth complex projective curve of degree \(d\) has the topology of a surface of genus \(\frac{(d-1)(d-2)}{2}\). But... what about singular curves? Let's look closely at \(x^3-y^2=0\). Local topology of singularities: it is the topological cone over the intersection with the boundary of the Milnor ball. So... what is this intersection? The boundary of the Milnor ball is \(\mathbb{S}^3\); the part of the complex curve inside the disk is a piece of surface; the intersection with the boundary is the boundary of a piece of surface: a curve without boundary. Closed curves in \(\mathbb{S}^3\): with one component, knots; in general, links. They can be seen in \(\mathbb{R}^3\) by stereographic projection. Milnor's trick: see them as a limit of smooth cases. Pushing the Milnor fibre to \(\mathbb{S}^3\). Milnor fibre. Picard-Lefschetz theory. Theorem: the singular curve is the result of collapsing some circles (vanishing cycles) on the Milnor fibre. Zariski's point of view: braids.
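The line-circle computation on those slides is easy to reproduce symbolically; here is a minimal sketch assuming SymPy is available:

```python
import sympy as sp

x, lam = sp.symbols('x lambda', real=True)

# Intersection of the circle (x-2)^2 + y^2 = 2 with the line y = lambda*x.
eq = sp.expand((x - 2)**2 + (lam * x)**2 - 2)
sols = sp.solve(sp.Eq(eq, 0), x)
print(sols)  # x = (2 +/- sqrt(2 - 2*lambda^2)) / (lambda^2 + 1)

# The discriminant changes sign at |lambda| = 1, as on the slides.
disc = sp.discriminant(eq, x)
print(sp.factor(disc))                          # proportional to (1 - lambda^2)
print(sols[0].subs(lam, sp.Rational(1, 2)))     # real intersection
print(sp.simplify(sols[0].subs(lam, 2)))        # complex intersection
```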
This is the final one of a series of posts about the manuscript “Finite Part of Operator K-theory for Groups Finitely Embeddable into Hilbert Space and the Degree of Non-rigidity of Manifolds” (ArXiv e-print 1308.4744. http://arxiv.org/abs/1308.4744) by Guoliang Yu and Shmuel Weinberger. In previous posts (most recently this one) I’ve described their main result about the assembly map, what I call the Finite Part Conjecture, and explained some of the methodology of the proof for the large class of groups that they call “finitely embeddable in Hilbert space”. Now I want to explain some of the consequences of the Finite Part Conjecture. In my last post I mentioned a paper by Deeley and Goffeng whose aim is to construct a geometric counterpart of the Higson-Roe analytic surgery sequence. This week, there appeared on the arXiv a new paper by Piazza and Schick which gives a new construction of the natural transformation from the original DIFF surgery exact sequence of Browder-Novikov-Sullivan-Wall to our analytic surgery sequence. This is a counterpart to a slightly earlier paper by the same authors in which they carry out the same project for the Stolz exact sequence for positive scalar curvature metrics. In our original papers, Nigel and I made extensive use of Poincaré spaces – the key facts being that the “higher signatures” can be defined for such spaces, and that the mapping cylinder of a homotopy equivalence between manifolds is an example of a Poincaré space (with boundary). In fact, these observations can be used to prove the homotopy invariance of the higher signatures – this argument is the one that appears in the 1970s papers of Kasparov and Mischenko, essentially – and the natural transformation from geometric to analytic surgery should be thought of as a “quantification” of this homotopy invariance argument. Now there is a different argument for homotopy invariance, due to Hilsum and Skandalis, that has a more analytical feel. The point of the new Piazza-Schick paper is to “quantify” this argument in the same way that we did the Poincaré complex argument. This should lead to the same maps (or at least, to maps having the same properties – then one is faced with a secondary version of the “comparing assembly maps” question) in perhaps a more direct way. References Hilsum, Michel, and Georges Skandalis. “Invariance Par Homotopie de La Signature à Coefficients Dans Un Fibré Presque Plat.” Journal Fur Die Reine Und Angewandte Mathematik 423 (1992): 73–99. doi:10.1515/crll.1992.423.73. Kasparov, G.G. “K-theory, Group C*-algebras, and Higher Signatures (Conspectus).” In Proceedings of the 1993 Oberwolfach Conference on the Novikov Conjecture, edited by S. Ferry, A. Ranicki, and J. Rosenberg, 226:101–146. LMS Lecture Notes. Cambridge University Press, Cambridge, 1995. Mischenko, A.S. “Infinite Dimensional Representations of Discrete Groups and Higher Signatures.” Mathematics of the USSR — Izvestija 8 (1974): 85–111. Piazza, Paolo, and Thomas Schick. “Rho-classes, Index Theory and Stolz’ Positive Scalar Curvature Sequence.” arXiv:1210.6892 (October 25, 2012). http://arxiv.org/abs/1210.6892 ———. The Surgery Exact Sequence, K-theory and the Signature Operator. ArXiv e-print, September 17, 2013. http://arxiv.org/abs/1309.4370 In our Mapping surgery to analysis papers, Nigel and I proposed an analytic counterpart of the surgery exact sequence which summarizes the main results of the (Browder, Novikov, Sullivan, Wall) theory of high-dimensional manifolds.
This exact sequence identifies the set of manifold structures within a given homotopy type \(X\) (the structure set) as the fiber of an assembly map \[ H_*(X; {\mathbb L}(e)) \to L_*({\mathbb Z}\pi_1(X)) \] which abstracts the idea of obtaining “signature obstructions” from a “surgery problem”. Analogously, we constructed an analytic structure set (actually the K-theory of a certain C*-algebra) as the fiber of a Baum-Connes type assembly map, and showed that index theory provides a natural transformation from the topological surgery exact sequence to our analytic surgery exact sequence. Our structure set is defined in purely analytic terms. However, in a subsequent paper where we related our exact sequence to the theory of \(\eta\) invariants, it became useful to have a more geometrical approach to the structure set also. (The relation between the “more geometrical” and “more analytical” approaches is roughly the same as that between the Baum-Douglas and Kasparov models of K-homology.) Our paper didn’t give a geometric definition of the structure set – just a geometric approach to certain elements. A recent arXiv paper by Deeley and Goffeng proposes to take this idea to its logical conclusion by constructing a Baum-Douglas type model for the whole analytic structure set. The basic idea is this: An element of the structure set should be “an elliptic operator together with a reason that its index vanishes”. The cobordism invariance of the index shows that one example of such a “reason” is that our elliptic operator is actually defined on the boundary of some manifold (and that our operator is a boundary operator). Therefore a first approximation to a Baum-Douglas model of the structure set should have as cycles spin-c manifolds with boundary \( (M,\partial M) \) together with maps \(\partial M \to X\). But of course this (cobordism) is not the only known reason for the vanishing of an index (e.g., as I understand it, the fundamental question about positive scalar curvature metrics is whether positive scalar curvature implies some bordism condition). So suppose you have an elliptic operator whose index vanishes for some “positive scalar curvature type” reason. How are you to build a structure class? It seems to me that Deeley-Goffeng deal with this by incorporating quite a lot of analysis into their geometric cycles – as well as the bordism that I have described, there are also projective module bundles over the group algebra, etc… this makes the desired exactness true, but perhaps at the cost of making the groups less geometrical; they are a “geometry-analysis hybrid”. And that is inevitable in this problem. I should mention that several other applications of the analytic surgery sequence depend on constructing an appropriate nice model for the structure set: e.g. Siegel, Xie-Yu (see below). I’m not sure whether our original model is “nice” for anybody! References Deeley, Robin, and Magnus Goffeng. Realizing the Analytic Surgery Group of Higson and Roe Geometrically, Part I: The Geometric Model. ArXiv e-print, August 27, 2013. http://arxiv.org/abs/1308.5990. Higson, Nigel, and John Roe. “Mapping Surgery to Analysis. I. Analytic Signatures.” K-Theory. An Interdisciplinary Journal for the Development, Application, and Influence of K-Theory in the Mathematical Sciences 33, no. 4 (2005): 277–299. doi:10.1007/s10977-005-1561-8. ———. “Mapping Surgery to Analysis. II. Geometric Signatures.” K-Theory.
An Interdisciplinary Journal for the Development, Application, and Influence of K-Theory in the Mathematical Sciences 33, no. 4 (2005): 301–324. doi:10.1007/s10977-005-1559-2. ———. “Mapping Surgery to Analysis. III. Exact Sequences.” K-Theory. An Interdisciplinary Journal for the Development, Application, and Influence of K-Theory in the Mathematical Sciences 33, no. 4 (2005): 325–346. doi:10.1007/s10977-005-1554-7. Higson, Nigel, and John Roe. “\(K\)-homology, Assembly and Rigidity Theorems for Relative Eta Invariants.” Pure and Applied Mathematics Quarterly 6, no. 2, Special Issue: In honor of Michael Atiyah and Isadore Singer (2010): 555–601. Siegel, Paul. “The Mayer-Vietoris Sequence for the Analytic Structure Group.” arXiv:1212.0241 (December 2, 2012). http://arxiv.org/abs/1212.0241. Siegel, Paul. “Homological Calculations with the Analytic Structure Group.” PhD Thesis, Penn State, 2012. https://etda.libraries.psu.edu/paper/16113/. Xie, Zhizhang, and Guoliang Yu. “A Relative Higher Index Theorem, Diffeomorphisms and Positive Scalar Curvature.” arXiv:1204.3664 (April 16, 2012). http://arxiv.org/abs/1204.3664. Xie, Zhizhang, and Guoliang Yu. “Positive Scalar Curvature, Higher Rho Invariants and Localization Algebras.” arXiv:1302.4418 (February 18, 2013). http://arxiv.org/abs/1302.4418. In 1996 I was the Ulam Visiting Professor at the University of Colorado, Boulder. While I was there I gave a series of graduate lectures on high-dimensional manifold theory, which I whimsically titled Surgery for Amateurs. The title was supposed to express that I was coming to the subject from outside – basically, trying to answer to my own satisfaction the question “What is this Novikov Conjecture you keep talking about?” Perhaps because of their amateurish nature, though, these lectures struck a chord, and I have received many requests for reprints of the lecture notes. In 2004 I began a project of revising them with the help of Andrew Ranicki; but, alas, other parts of life intervened, and the proposed book never got finished. Obviously some people still value the material, and my plan is to try and republish it in blog form, along with comments and discussion. The Surgery for Amateurs blog is now live and your participation is welcomed!
Sometimes finding the volume of a solid of revolution using the disk or washer method is difficult or impossible. For example, consider the solid obtained by rotating the region bounded by the line \(y = 0\) and the curve \(y = {x^2}-{x^3}\) about the \(y-\)axis. The cross section of the solid of revolution is a washer. However, in order to use the washer method, we need to convert the function \(y = {x^2} – {x^3}\) into the form \(x = f\left( y \right),\) which is not easy. In such cases, we can use a different method for finding the volume, called the method of cylindrical shells. This method considers the solid as a series of concentric cylindrical shells wrapped around the axis of revolution. With the disk or washer methods, we integrate along the coordinate axis parallel to the axis of revolution. With the shell method, we integrate along the coordinate axis perpendicular to the axis of revolution. As before, we consider a region bounded by the graph of the function \(y = f\left( x \right),\) the \(x-\)axis, and the vertical lines \(x = a\) and \(x = b,\) where \(0 \le a \lt b.\) The volume of the solid obtained by rotating the region about the \(y-\)axis is given by the integral \[V = 2\pi \int\limits_a^b {xf\left( x \right)dx},\] where \(2\pi x\) is the circumference of the elementary shell, \({f\left( x \right)}\) is the height of the shell, and \(dx\) is its thickness. If a region is bounded by two curves \(y = f\left( x \right)\) and \(y = g\left( x \right)\) on an interval \(\left[ {a,b} \right],\) where \(0 \le g\left( x \right) \le f\left( x \right),\) then the volume of the solid obtained by rotating the region about the \(y-\)axis is expressed by the integral of the difference of two functions: \[V = 2\pi \int\limits_a^b {x\left[ {f\left( x \right) – g\left( x \right)} \right]dx} .\] We can easily modify these formulas if a solid is formed by rotating around the \(x-\)axis. The two formulas listed above become: If the region is bounded by a curve and the \(y-\)axis: \[V = 2\pi \int\limits_c^d {yf\left( y \right)dy}; \] If the region is bounded by two curves: \[V = 2\pi \int\limits_c^d {y\left[ {f\left( y \right) – g\left( y \right)} \right]dy} .\] Suppose now that the region bounded by a curve \(y = f\left( x \right)\) and the \(x-\)axis on the interval \(\left[ {a,b} \right]\) is rotating around the vertical line \(x = h.\) In this case, we can apply the following formulas for finding the volume of the solid of revolution: \[{V \text{ = }}\kern0pt{\left\{ {\begin{array}{*{20}{l}} {2\pi \int\limits_a^b {\left( {x – h} \right)f\left( x \right)dx} ,\text{ if } h \le a \lt b}\\ {2\pi \int\limits_a^b {\left( {h – x} \right)f\left( x \right)dx} ,\text{ if } a \lt b \le h} \end{array}} \right.}\] Similarly, if the region bounded by a curve \(x = f\left( y \right)\) and the \(y-\)axis on the interval \(\left[ {c,d} \right]\) is rotating around the horizontal line \(y = m,\) then the volume of the obtained solid is given by \[{V \text{ = }}\kern0pt{\left\{ {\begin{array}{*{20}{l}} {2\pi \int\limits_c^d {\left( {y – m} \right)f\left( y \right)dy} ,\text{ if } m \le c \lt d}\\ {2\pi \int\limits_c^d {\left( {m – y} \right)f\left( y \right)dy} ,\text{ if } c \lt d \le m} \end{array}} \right.}\] Now let’s return to the example mentioned above and find the volume of the solid using the shell method. The cubic curve \(y = {x^2} – {x^3}\) intersects the \(x-\)axis at the points \(x = 0\) and \(x = 1.\) These will be the limits of integration.
Then, the volume of the solid is \[{V = 2\pi \int\limits_a^b {xf\left( x \right)dx} }={ 2\pi \int\limits_0^1 {x\left( {{x^2} – {x^3}} \right)dx} }={ 2\pi \int\limits_0^1 {\left( {{x^3} – {x^4}} \right)dx} }={ 2\pi \left. {\left( {\frac{{{x^4}}}{4} – \frac{{{x^5}}}{5}} \right)} \right|_0^1 }={ 2\pi \left( {\frac{1}{4} – \frac{1}{5}} \right) }={ \frac{\pi }{{10}}.}\] Solved Problems Click a problem to see the solution. Example 1Find the volume of the solid obtained by rotating about the \(y-\)axis the region bounded by the curve \(y = 3{x^2} – {x^3}\) and the line \(y = 0.\) Example 2Find the volume of the solid obtained by rotating the sine function between \(x = 0\) and \(x = \pi\) about the \(y-\)axis. Example 3Calculate the volume of the solid obtained by rotating the cosine function between \(x = 0\) and \(x = \large{\frac{\pi }{2}}\normalsize\) about the \(y-\)axis. Example 4The region bounded by the parabola \(x = {\left( {y – 1} \right)^2}\) and coordinate axes rotates around the \(x-\)axis. Find the volume of the obtained solid of revolution. Example 5Calculate the volume of a sphere of radius \(R\) using the shell integration. Example 6Find the volume of the solid obtained by rotating about the \(y-\)axis the region bounded by the curve \(y = {e^{ – x}},\) where \(0 \le x \lt \infty \) and the horizontal line \(y = 0.\) Example 7Find the volume of the solid formed by rotating about the line \(x = -1\) the region bounded by the parabola \(y = {x^2}\) and the lines \(y = 0,\) \(x = 1.\) Example 8The region bounded by curves \(y = {x^2}\) and \(y = \sqrt x \) rotates around the \(y-\)axis. Find the volume of the solid of revolution. Example 9The region is bounded by the hyperbola \(y = \large{\frac{1}{x}}\normalsize\) and the line \(y = \large{\frac{5}{2}}\normalsize – x.\) Find the volume of the solid formed by rotating the region about the \(y-\)axis. Example 10Find the volume of the solid obtained by rotating about the line \(y = 2\) the region bounded by the curve \(x = 1 + {y^2}\) and the lines \(y = 0,\) \(y = 1.\) Example 1.Find the volume of the solid obtained by rotating about the \(y-\)axis the region bounded by the curve \(y = 3{x^2} – {x^3}\) and the line \(y = 0.\) Solution. By solving the equation \(3{x^2} – {x^3} = 0\) we find the limits of integration: \[{3{x^2} – {x^3} = 0,}\;\; \Rightarrow {{x^2}\left( {3 – x} \right) = 0, }\;\;\Rightarrow {{x_1} = 0,\;}\kern0pt{{x_2} = 3.}\] By the shell method, \[{V = 2\pi \int\limits_a^b {xf\left( x \right)dx} }={ 2\pi \int\limits_0^3 {x\left( {3{x^2} – {x^3}} \right)dx} }={ 2\pi \int\limits_0^3 {\left( {3{x^3} – {x^4}} \right)dx} }={ 2\pi \left. {\left( {\frac{{3{x^4}}}{4} – \frac{{{x^5}}}{5}} \right)} \right|_0^3 }={ 2\pi \cdot {3^5}\left( {\frac{1}{4} – \frac{1}{5}} \right) }={ \frac{{243\pi }}{{10}}.}\] Example 2.Find the volume of the solid obtained by rotating the sine function between \(x = 0\) and \(x = \pi\) about the \(y-\)axis. Solution. Using the shell method and integrating by parts, we have \[{V = 2\pi \int\limits_0^\pi {x\sin xdx} }={ \left[ {\begin{array}{*{20}{l}} {u = x}\\ {dv = \sin xdx}\\ {du = dx}\\ {v = – \cos x} \end{array}} \right] }={ 2\pi \left\{ {\left. {\left[ { – x\cos x} \right]} \right|_0^\pi – \int\limits_0^\pi {\left( { – \cos x} \right)dx} } \right\} }={ 2\pi \left\{ {\left. {\left[ { – x\cos x} \right]} \right|_0^\pi + \int\limits_0^\pi {\cos xdx} } \right\} }={ 2\pi \left\{ {\left. {\left[ { – x\cos x} \right]} \right|_0^\pi + \left. 
{\left[ {\sin x} \right]} \right|_0^\pi } \right\} }={ 2\pi \left. {\left[ {\sin x – x\cos x} \right]} \right|_0^\pi }={ 2\pi \left[ {0 – \pi \cdot \left( { – 1} \right) – 0} \right] }={ 2{\pi ^2}.}\] Example 3.Calculate the volume of the solid obtained by rotating the cosine function between \(x = 0\) and \(x = \large{\frac{\pi }{2}}\normalsize\) about the \(y-\)axis. Solution. Using the method of cylindrical shells and integrating by parts, we get \[{V = 2\pi \int\limits_0^{\frac{\pi }{2}} {x\cos xdx} }={ \left[ {\begin{array}{*{20}{l}} {u = x}\\ {dv = \cos xdx}\\ {du = dx}\\ {v = \sin x} \end{array}} \right] }={ 2\pi \left\{ {\left. {\left[ {x\sin x} \right]} \right|_0^{\frac{\pi }{2}} – \int\limits_0^{\frac{\pi }{2}} {\sin xdx} } \right\} }={ 2\pi \left\{ {\left. {\left[ {x\sin x} \right]} \right|_0^{\frac{\pi }{2}} + \left. {\left[ {\cos x} \right]} \right|_0^{\frac{\pi }{2}}} \right\} }={2\pi \left. {\left[ {x\sin x + \cos x} \right]} \right|_0^{\frac{\pi }{2}} }={ 2\pi \left[ {\left( {\frac{\pi }{2} \cdot 1 + 0} \right) – \left( {0 + 1} \right)} \right] }={ \pi \left( {\pi – 2} \right).}\] Example 4.The region bounded by the parabola \(x = {\left( {y – 1} \right)^2}\) and coordinate axes rotates around the \(x-\)axis. Find the volume of the obtained solid of revolution. Solution. We can use the shell method to calculate the volume of the given solid. The region here rotates around the \(x-\)axis, so the volume is defined by the formula \[V = 2\pi \int\limits_c^d {yx(y)dy} ,\] assuming the variable \(y\) varies from \(c\) to \(d.\) In our case, \(c = 0,\) \(d = 1,\) \(x\left( y \right) = {\left( {y – 1} \right)^2}.\) Hence, \[{V = 2\pi \int\limits_0^1 {y{{\left( {y – 1} \right)}^2}dy} }={ 2\pi \int\limits_0^1 {y\left( {{y^2} – 2y + 1} \right)dy} }={ 2\pi \int\limits_0^1 {\left( {{y^3} – 2{y^2} + y} \right)dy} }={ 2\pi \left. {\left[ {\frac{{{y^4}}}{4} – \frac{{2{y^3}}}{3} + \frac{{{y^2}}}{2}} \right]} \right|_0^1 }={ 2\pi \left( {\frac{1}{4} – \frac{2}{3} + \frac{1}{2}} \right) }={ \frac{\pi }{6}.}\] Example 5.Calculate the volume of a sphere of radius \(R\) using the shell integration. Solution. We consider the region which occupies the part of the disk \({x^2} + {y^2} = {R^2}\) in the first quadrant. Taking the integral \(2\pi \int\limits_0^R {x\sqrt {{R^2} – {x^2}} dx} \) and multiplying the result by two, we find the volume of the sphere. It’s convenient to change the variable. Let \({R^2} – {x^2} = {t^2},\) so \(xdx=-tdt.\) When \(x = 0,\) then \(t = R,\) and when \(x = R,\) then \(t = 0.\) Hence, the integral becomes \[{V = 4\pi \int\limits_0^R {x\sqrt {{R^2} – {x^2}} dx} }={ 4\pi \int\limits_R^0 {t\left( { – tdt} \right)} }={ 4\pi \int\limits_0^R {{t^2}dt} }={ \left. {\frac{{4\pi {t^3}}}{3}} \right|_0^R }={ \frac{{4\pi {R^3}}}{3}.}\] Example 6.Find the volume of the solid obtained by rotating about the \(y-\)axis the region bounded by the curve \(y = {e^{ – x}},\) where \(0 \le x \lt \infty \) and the horizontal line \(y = 0.\) Solution. The volume of the solid is expressed by the improper integral \[V = 2\pi \int\limits_0^\infty {x{e^{ – x}}dx} .\] To take the integral we use integration by parts: \[{V = 2\pi \int\limits_0^\infty {x{e^{ – x}}dx} }={ \left[ {\begin{array}{*{20}{l}} {u = x}\\ {dv = {e^{ – x}}dx}\\ {du = dx}\\ {v = – {e^{ – x}}} \end{array}} \right] }={ 2\pi \left\{ {\left. {\left[ { – x{e^{ – x}}} \right]} \right|_0^\infty – \int\limits_0^\infty {\left( { – {e^{ – x}}} \right)dx} } \right\} }={ 2\pi \left\{ {\left. 
{\left[ { – x{e^{ – x}}} \right]} \right|_0^\infty + \int\limits_0^\infty {{e^{ – x}}dx} } \right\} }={ 2\pi \left\{ {\left. {\left[ { – x{e^{ – x}}} \right]} \right|_0^\infty – \left. {\left[ {{e^{ – x}}} \right]} \right|_0^\infty } \right\} }={ – 2\pi \left. {\left[ {{e^{ – x}}\left( {x + 1} \right)} \right]} \right|_0^\infty .}\] We replace the infinite upper limit with a finite value \(b\) and then take the limit as \(b \to \infty.\) \[{V = – 2\pi \lim\limits_{b \to \infty } \left. {\left[ {{e^{ – x}}\left( {x + 1} \right)} \right]} \right|_0^b }={ – 2\pi \lim\limits_{b \to \infty } \left[ {{e^{ – b}}\left( {b + 1} \right) – {e^0}\left( {0 + 1} \right)} \right] }={ 2\pi \lim\limits_{b \to \infty } \left[ {1 – \frac{{b + 1}}{{{e^b}}}} \right].}\] By L’Hopital’s Rule, \[{\lim \limits_{b \to \infty } \frac{{b + 1}}{{{e^b}}} }={\lim \limits_{b \to \infty } \frac{{\left( {b + 1} \right)’}}{{\left( {{e^b}} \right)’}} }={ \lim \limits_{b \to \infty } \frac{1}{{{e^b}}} }={ 0.}\] So, the volume of the solid is \(V = 2\pi .\) Example 7.Find the volume of the solid formed by rotating about the line \(x = -1\) the region bounded by the parabola \(y = {x^2}\) and the lines \(y = 0,\) \(x = 1.\) Solution. The curve is rotating about the line \(x = -1,\) which is on the left from the interval of integration \(\left[ {0,1} \right].\) Therefore, we use the following formula for calculating the volume: \[V = 2\pi \int\limits_a^b {\left( {x – h} \right)f\left( x \right)dx} ,\] where \(h = -1,\) \(a = 0,\) \(b = 1.\) So, we have \[{V = 2\pi \int\limits_0^1 {\left( {x + 1} \right){x^2}dx} }={ 2\pi \int\limits_0^1 {\left( {{x^3} + {x^2}} \right)dx} }={ 2\pi \left. {\left[ {\frac{{{x^4}}}{4} + \frac{{{x^3}}}{3}} \right]} \right|_0^1 }={ 2\pi \left( {\frac{1}{4} + \frac{1}{3}} \right) }={ \frac{{7\pi }}{6}.}\] Example 8.The region bounded by curves \(y = {x^2}\) and \(y = \sqrt x \) rotates around the \(y-\)axis. Find the volume of the solid of revolution. Solution. Obviously, both curves intersect at the points \(x = 0\) and \(x = 1,\) so using the shell method we will integrate from \(0\) to \(1\). The region here is bounded by two curves. Therefore the integration formula is written in the form similar to the washer method: \[V = 2\pi \int\limits_a^b {x\left[ {f\left( x \right) – g\left( x \right)} \right]dx} .\] Substituting \(a = 0,\) \(b= 1,\) \(f\left( x \right) = \sqrt x,\) \(g\left( x \right) = {x^2},\) we obtain: \[{V = 2\pi \int\limits_0^1 {x\left[ {\sqrt x – {x^2}} \right]dx} }={ 2\pi \int\limits_0^1 {\left[ {{x^{\frac{3}{2}}} – {x^3}} \right]dx} }={ 2\pi \left. {\left[ {\frac{{2{x^{\frac{5}{2}}}}}{5} – \frac{{{x^4}}}{4}} \right]} \right|_0^1 }={ 2\pi \left( {\frac{2}{5} – \frac{1}{4}} \right) }={ \frac{{3\pi }}{{10}}.}\] Compare the result with Example \(3\) on the page Volume of a Solid of Revolution: Disks and Washers. Example 9.The region is bounded by the hyperbola \(y = \large{\frac{1}{x}}\normalsize\) and the line \(y = \large{\frac{5}{2}}\normalsize – x.\) Find the volume of the solid formed by rotating the region about the \(y-\)axis. Solution. First we determine the points of intersection of both curves: \[{\frac{1}{x} = \frac{5}{2} – x,}\;\; \Rightarrow {{x^2} – \frac{5}{2}x + 1 = 0,}\;\; \Rightarrow {2{x^2} – 5x + 2 = 0,}\;\; \Rightarrow {D = {\left( { – 5} \right)^2} – 4 \cdot 2 \cdot 2 = 9,}\;\; \Rightarrow {{x_{1,2}} = \frac{{5 \pm \sqrt 9 }}{4} }={ \frac{1}{2},\,2.}\] We use the shell method for finding the volume. 
As the region is bounded by two curves, the integration formula is given by \[V = 2\pi \int\limits_a^b {x\left[ {f\left( x \right) – g\left( x \right)} \right]dx} .\] Here \(a = \large{\frac{1}{2}}\normalsize,\) \(b = 2,\) \(f\left( x \right) = \large{\frac{5}{2}}\normalsize – x,\) \(g\left( x \right) = \large{\frac{1}{x}}\normalsize.\) So the volume of the solid is \[{V = 2\pi \int\limits_{\frac{1}{2}}^2 {x\left[ {\frac{5}{2} – x – \frac{1}{x}} \right]dx} }={ 2\pi \int\limits_{\frac{1}{2}}^2 {\left[ {\frac{{5x}}{2} – {x^2} – 1} \right]dx} }={ 2\pi \left. {\left[ {\frac{{5{x^2}}}{4} – \frac{{{x^3}}}{3} – x} \right]} \right|_{\frac{1}{2}}^2 }={ 2\pi \left[ {\left( {5 – \frac{8}{3} – 2} \right) }\right.}-{\left.{ \left( {\frac{5}{{16}} – \frac{1}{{24}} – \frac{1}{2}} \right)} \right] }={ 2\pi \left[ {\frac{7}{2} – \frac{8}{3} – \frac{5}{{16}} + \frac{1}{{24}}} \right] }={ 2\pi \cdot \frac{9}{{16}} }={ \frac{{9\pi }}{8}}\] Example 10.Find the volume of the solid obtained by rotating about the line \(y = 2\) the region bounded by the curve \(x = 1 + {y^2}\) and the lines \(y = 0,\) \(y = 1.\) Solution. The given figure rotates around the horizontal axis \(y = m = 2.\) To determine the volume of the solid of revolution, we apply the shell integration formula in the form \[V = 2\pi \int\limits_c^d {\left( {m – y} \right)x\left( y \right)dy} .\] In this case, \(m = 2,\) \(c = 0,\) \(d = 1,\) \(x\left( y \right) = 1 + {y^2}.\) This yields \[{V = 2\pi \int\limits_0^1 {\left( {2 – y} \right)\left( {1 + {y^2}} \right)dy} }={ 2\pi \int\limits_0^1 {\left( {2 – y + 2{y^2} – {y^3}} \right)dy} }={ 2\pi \left. {\left[ {2y – \frac{{{y^2}}}{2} + \frac{{2{y^3}}}{3} – \frac{{{y^4}}}{4}} \right]} \right|_0^1 }={ 2\pi \left[ {2 – \frac{1}{2} + \frac{2}{3} – \frac{1}{4}} \right] }={ \frac{{23\pi }}{6}.}\]
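The volumes above are easy to sanity-check numerically. Here is a minimal Python sketch assuming SciPy is available; it verifies the introductory example and Example 10.

```python
import numpy as np
from scipy.integrate import quad

# Introductory example: region under y = x^2 - x^3, rotated about the y-axis.
V1, _ = quad(lambda x: 2 * np.pi * x * (x**2 - x**3), 0, 1)
print(V1, np.pi / 10)            # both ~0.31416

# Example 10: region bounded by x = 1 + y^2, y = 0, y = 1, rotated about y = 2.
V2, _ = quad(lambda y: 2 * np.pi * (2 - y) * (1 + y**2), 0, 1)
print(V2, 23 * np.pi / 6)        # both ~12.0428
```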
Consider a Dp-brane. Compactify $d$ spatial dimensions over a torus $T^d$. Suppose $d\geqslant p$, and that the Dp-brane is completely wrapped around the compactified dimensions. Look at the open string modes ending on this wrapped D-brane. There is a zero energy open string mode associated with each spatial dimension. That corresponds to the orientation of the worldsheet field excitation. If the direction is along an uncompactified spatial dimension, that corresponds to quanta of brane displacements along that direction. These cases needn't concern us. If the direction is normal to the brane but lies along a compactified dimension, that corresponds to quanta of brane displacements along that direction. If it's tangent to the brane, it corresponds to quanta of the Wilson line of the brane gauge field along that wrapped direction. Here's the question. Suppose the wrapped D-brane has a total mass of $M$. Suppose there is a compactified spatial dimension of radius $R$ along which the brane isn't wrapped, i.e. $p<d$. The brane has Kaluza-Klein momenta along that direction of value $n/R$ where n is integral. The energy spectrum is given by $$\sqrt {M^2 +n^2/R^2} ~\approx~ M + \frac{n^2}{2MR^2} +\mathcal{O}(M^{-3}).$$ A condensate of zero energy open strings with orientation along that direction ought to give a continuous modulus? Why is there no modulus then, and why is the energy spectrum discretized? Or consider a direction in which the brane is wrapped. We ought to have a continuous modulus of the Wilson line of the brane gauge field along that dimension? Once again, we have discretization. Why? Anyway, how can a completely wrapped D-brane have KK momentum? In the string worldsheet picture, we have open strings ending at the D-brane background at a fixed position. Along those compactified dimensions along which the brane isn't wrapped, we have Dirichlet boundary conditions. Such open strings can only have winding numbers, but no KK momentum. Even then, we're dealing with a condensate of open strings with zero winding number. This discretization doesn't occur if the brane extends along at least two uncompactified spatial dimensions, because the brane then has infinite mass. If it extends along only one uncompactified spatial dimension, there's the Mermin-Wagner theorem, meaning there's no fixed brane position or Wilson line. PS: Maybe this question can be rephrased in terms of BPS. We have a wrapped BPS brane. But nonperturbatively, somehow, the BPS state has to be delocalized along the compactified dimension or its Wilson line in a superposition over all possible values? Open strings with no energy are also BPS. So, we can have any condensate of them and still remain BPS? This clearly isn't the case with a lifted modulus, and a discretization of the energy spectrum. PPS: How do you even express the KK modes of the D-brane, or the dual to its Wilson lines, in terms of a condensate of open strings? Suppose you have the lowest energy state with zero KK momentum. Then, disregarding transverse displacements along the uncompactified spatial dimensions, there is an energy gap to the next energy state. However, a condensate of open strings with internal mode excitations along the compactified dimensions would naively give no energy gap. PPPS: Is the number of open string modes corresponding to these "disappearing moduli" even a well-defined operator? The worry arises precisely because, at the perturbative level, such open string modes have no energy.
If it's not a well-defined operator, just how do you even express this in terms of open string worldsheets?
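As a trivial check of the quoted expansion of the energy spectrum, here is a minimal sketch assuming SymPy:

```python
import sympy as sp

M, R, n = sp.symbols('M R n', positive=True)

# Expand sqrt(M^2 + n^2/R^2) for KK momentum n/R small compared with M.
E = sp.sqrt(M**2 + n**2 / R**2)
print(sp.series(E, n, 0, 5))   # M + n**2/(2*M*R**2) - n**4/(8*M**3*R**4) + O(n**5)
```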
Suppose that $\mathcal{L}$ is the language of a simply typed lambda calculus of two base types, $e$ and $t$, with infinitely many constants at each type. A substitution $j$ is a mapping from constants of type $\sigma$ to arbitrary closed $\mathcal{L}$-terms of type $\sigma$. $j$ extends to a mapping on arbitrary closed terms of type $\sigma$ in the obvious way: $j\alpha$ is the result of substituting the constants in $\alpha$ with their $j$ counterparts. Now suppose that for each constant $a:e$ there is a term $\phi^a: t$, and suppose moreover that: For any substitution, $j$, if $ja = jb$ then $j\phi^a =_{\eta\beta} j\phi^b$ Does it follow that there exists a closed term $\alpha:e\to t$ such that $\phi^a =_{\eta\beta} \alpha a$ for each constant $a$?
Expand $f(x) = \log(1 + x)$ around $x = 0$ to all orders. More precisely, find $a_n$ such that for any positive integer $N$, we have$$f(x) = \left(\sum_{n=0}^{N-1} a_nx^n\right) + E_N(x) \text{ for all }\left|x\right| < {1\over2},$$where $\left|E_N(x)\right| \le C_N\left|x\right|^N$ for $\left| x\right| \le 1/2$. How does the constant $C_N$ depend on $N$? How do I see that we have an infinite Taylor expansion$$f(x) = \sum_{n=0}^\infty a_nx^n \text{ for all }\left|x\right| < {1\over2}?$$What is the largest interval of validity of this series representation? Can we extend to $\left|x\right| < 1$ or beyond?

Here is an outline of an approach: First, show (induction works well) that the bound on $E_N$ gives $a_n = {1 \over n!} f^{(n)}(0)$. In particular, the $a_n$ are unique, so we can use any technique to find them. Note that since $f$ is only defined on $(-1,\infty)$ (in particular, $\lim_{x \downarrow -1} f(x) = -\infty$), the largest possible radius of convergence is bounded by $1$ (since $|0-(-1)| = 1$). Second, note that $f'(x) = {1 \over 1+x}$, and that if $|x|<1$, then $f'(x) = 1-x+x^2-x^3+\cdots$. Furthermore, convergence is uniform on any compact subset of $(-1,1)$ (Weierstrass M-test), and hence we can exchange integration and summation to see that $f(x) = x-{1 \over 2}x^2+{1 \over 3} x^3-{1 \over 4} x^4 + \cdots$. It follows from this that $a_0 = 0$ and $a_n = (-1)^{n+1}{1 \over n}$ for $n>0$. Then $$E_N(x) = \sum_{n=N}^\infty (-1)^{n+1}{1 \over n} x^n = x^N \sum_{n=N}^\infty (-1)^{n+1}{1 \over n} x^{n-N} = x^N \sum_{n=0}^\infty (-1)^{n+1+N}{1 \over {n+N}} x^{n}.$$ It follows that $C_N = \sup_{|x|< {1 \over 2}} \left|\sum_{n=0}^\infty (-1)^{n+1+N}{1 \over {n+N}} x^{n}\right| = \sum_{n=0}^\infty {1 \over {n+N}} {1 \over 2^{n}}$. An immediate closed form expression for $C_N$ is not clear to me, but it is easy to see the estimate $C_N \le {2 \over N}$. In fact, if we take $|x| < R$, where $R < 1$, we can form a bound $C_N \le {K \over N}$, where $K$ is independent of $N$. In particular, we have $\sup_{|x|<R} |E_N(x)| \le {K \over N} R^N$. Hence we see that for any $R<1$ we have $\lim_{N \to \infty} \sup_{|x|<R} |E_N(x)| = 0$, and hence the Taylor series approximation converges uniformly to $f(x)$ for $|x|<R$.

We know that $\log$ is the inverse function of $g(y) = e^y$. We have that $g$ takes only positive values and is continuous, so $g$ is bounded away from $0$ on $[-T, T]$ for all $T > 0$. Consequently, the inverse function $\log$ must be unbounded in any neighborhood of $0$, and so the power series for $\log(1 + x)$ cannot converge to a continuous function on any neighborhood of $x = -1$. Thus, our best hope is to have a series converging to $\log(1 + x)$ for $\left|x\right| < 1$. In fact, it will be continuous on $(-1, 1]$, though we will not worry about $x = 1$. Now, since$$g'(y) = g(y) > 0$$for all $y$, we have that $g$ is strictly increasing, so if $g(y_n) \to g(y_0)$, we must have $y_n \to y_0$.
Thus, if $g(y_0) = x_0$, $h = g^{-1}$, and $g(y_n) = x_n$ with $y_n \to y_0$, we have$$\lim_{n \to \infty} \left({{h(x_n) - h(x_0)}\over{x_n - x_0}}\right)\left({{g(y_n) - g(y_0)}\over{y_n - y_0}}\right) = 1.$$Since$${{g(y_n) - g(y_0)}\over{y_n - y_0}} \to g'(y_0) = g(y_0) \neq 0,$$we get$$\lim_{n \to \infty} {{h(x_n) - h(x_0)}\over{x_n - x_0}} = {1\over{g(y_0)}} = {1\over{x_0}}.$$This is true for any sequence $x_n \to x_0$, so we conclude $h'(x_0) = 1/x_0$. It is clear that $g(y) \to \infty$ as $y \to \infty$, so $g(-y) \to 0$ as $y \to \infty$ by the functional equation. But $g(y) >0$ for all $y \in \mathbb{R}$, so by the Intermediate Value Theorem we conclude $g$ has range $(0, \infty)$; moreover, $g$ is injective, since $g' > 0$ and the Mean Value Theorem imply $g$ is $1$-$1$. So $h = g^{-1}$ is defined on $(0, \infty)$, and we see $h'(x) = 1/x$ for all $x > 0$. We have that $h(1) = 0$, so$$h(x) = \int_1^x {{dt}\over{t}}.$$Thus,$$f(x) = h(1+x) = \int_0^x {{dt}\over{1 + t}}.$$Suppose $\left|x\right| < 1$. Then on $[-\left|x\right|,\left|x\right|]$, the power series for $1/(1 + t)$ converges uniformly, so we may interchange limits, as follows:$$f(x) = \left(\int_0^x \left(\lim_{n \to \infty} \sum_{k=0}^n (-t)^k\right)dt\right) + f(0)$$$$= \left(\lim_{n \to \infty} \int_0^x \left(\sum_{k=0}^n (-t)^k\right) dt\right) + 0$$$$= \lim_{n \to \infty} \left(-\sum_{k=1}^{n+1} {{(-x)^k}\over{k}}\right)$$$$= -\sum_{k=1}^\infty {{(-x)^k}\over{k}},$$and the series converges to $\log(1+x)$ for $\left|x\right| < 1$. Note that uniform convergence was needed to change the order of the limit and the integral.
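A quick numerical sanity check of the outline above (my own sketch, assuming only numpy; the grid and tolerance are arbitrary choices). It verifies that $|E_N(x)| \le \frac{2}{N}|x|^N$ on $|x| \le 1/2$ for a few values of $N$:

import numpy as np

def partial_sum(x, N):
    # Sum_{n=1}^{N-1} a_n x^n with a_n = (-1)^(n+1)/n (and a_0 = 0).
    n = np.arange(1, N)
    return np.sum(((-1) ** (n + 1) / n) * x[:, None] ** n, axis=1)

x = np.linspace(-0.5, 0.5, 1001)
for N in (2, 5, 10, 20):
    E_N = np.log1p(x) - partial_sum(x, N)
    bound = (2.0 / N) * 0.5 ** N          # C_N <= 2/N and |x| <= 1/2
    print(N, np.max(np.abs(E_N)) <= bound + 1e-12)   # True for each N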
Star forming regions

Massive stars are born in warm molecular clouds; because of their size, they shape the environment around them through processes such as photoionisation, stellar winds and supernovae. It may also be the case that massive stars trigger or terminate the formation of less massive stars. Low-mass stars are born in cold, dark molecular clouds, Bok globules and maybe also near massive stars. Recently, it seems that low-mass stars are born in cluster-like environments; however, it seems that these clusters do not survive, as most low-mass stars are not found in clusters.

Low mass stars

Stellar collapse (low mass): cool molecular H$_2$ cores collapse when their mass exceeds the Jeans mass. The collapse can be triggered by loss of magnetic support, collision with other cores or compression due to nearby supernovae. The collapse is isothermal (and 'inside out'), since energy is radiated away efficiently; the core collapses from ~$10^6\,R_\odot$ to ~$5\,R_\odot$ over a timescale of $10^5$–$10^6$ yr. The collapse stops when the material becomes optically thick and can no longer remain isothermal. This is a protostar.

Conserving angular momentum: to 'store' the angular momentum once the star has formed, stars do not generally form on their own. Instead, most stars will form together with planetary systems or stellar companions. Note that stars start off by rotating rapidly; stars like the Sun are slowed by magnetic braking.

Pre-main sequence phase: when $T_c \sim 10^6$ K, an accreting protostar will start burning deuterium. When deuterium burning is finished, the star contracts until $T_c$ reaches ~$10^7$ K, at which point hydrogen burning starts and the main sequence lifetime begins.

White dwarfs are the endpoints of stars with $M \leq 1.4\,M_\odot$. Supported by electron degeneracy pressure, they are usually made of carbon and oxygen (although there are some He and O-Ne-Mg white dwarfs).

White dwarf mass–radius relations (see also degeneracy on the previous page):
Non-relativistic degeneracy: $R$ decreases with increasing mass; $\rho$ increases with $M$.
Relativistic degeneracy: $M$ independent of $R$. There is a maximum mass beyond which collapse occurs. We will find out what it is below.

Chandrasekhar mass (limiting mass for white dwarfs): consider a star of radius $R$ containing $N$ fermions, each associated with a mass $\mu m_H$ (as always, $\mu$ is the mean molecular mass, in this case per fermion).
Number density $n \sim N/R^3$; volume per fermion $\sim 1/n$.
Heisenberg uncertainty principle: $p \sim \hbar n^{1/3}$.
Relativistic Fermi energy: $E_F \sim pc \sim \hbar c\, n^{1/3} \sim \frac{\hbar c\, N^{1/3}}{R}$.
GPE per fermion: $E_g \sim -\frac{G M \mu m_H}{R} \sim -\frac{G N (\mu m_H)^2}{R}$, where $M = N \mu m_H$.
Total energy per particle: $E = E_F + E_g \sim \frac{\hbar c N^{1/3}}{R} - \frac{G N (\mu m_H)^2}{R}$.
The stable configuration is where $E$ is minimised. If $E<0$, $E$ can be decreased without bound by decreasing $R$ – therefore there is no stable equilibrium point and the star undergoes gravitational collapse. The point where $E=0$ gives the maximum value of $N$ for which there is no collapse. With a rigorous calculation, we get $M_{\rm Ch} \approx 1.4\,M_\odot$ – this is the Chandrasekhar mass.

Evolution of massive stars ($M \gtrsim 13\,M_\odot$): we already mentioned that more massive stars can continue to fuse heavier elements up to iron. The core undergoes alternate burning and contraction phases until an iron core is formed, with fusion of lighter elements occurring in onion-like shells.
Carbon burning ($T \sim 6\times10^8$ K): C $\to$ Ne, Na, Mg
Oxygen burning ($T \sim 10^9$ K): O $\to$ Si, P, S, Mg
Silicon burning: Si $\to$ Fe
Core collapse supernovae are triggered after the exhaustion of nuclear fuel in the core of a massive star if the iron core mass exceeds the Chandrasekhar mass. Gravitational energy from the collapsing core provides energy for the explosion, most of which is emitted in the form of neutrinos.
By an unknown mechanism, ~1% of the energy is deposited in the stellar envelope, which is blown away (ejected), leaving a compact remnant (neutron star or black hole).

Thermonuclear explosions: if an accreting CO white dwarf reaches the Chandrasekhar mass, the carbon is ignited under degenerate conditions. The nuclear burning raises $T$, but not $P$, and just as with the helium flash we get thermonuclear runaway. This results in incineration and complete destruction of the star; with $10^{44}$ J of nuclear energy release, no remnant is expected. These explosions are the main producers of iron and also act as standard candles (which we discussed in the Cosmology section).

Supernova classification: there are many different types of supernova, and if truth be told, not even the experts can decide how to classify them. For that reason, only a general overview will be given here.
Type I: no hydrogen lines in spectrum.
Type II: hydrogen lines in spectrum.
Theoretical types: the thermonuclear and core collapse models discussed above. The relationship between the observational and theoretical types is no longer 1:1.

Neutron stars are the end products of core collapse of massive stars (between 8 and ~20 $M_\odot$). In the collapse, all nuclei are dissociated to produce a very compact remnant mainly composed of neutrons (with a few protons and electrons). The typical radius of the remnant is ~10 km, with a density of ~$10^{18}$ kg m$^{-3}$. Just as with white dwarfs, the fact that neutrons, like electrons, are fermions leads to a maximum possible mass which can be supported by neutron degeneracy – this mass is estimated to be between 1.5 and 3 $M_\odot$. The dissociation of the nuclei is endothermic, using some of the gravitational energy released in the collapse. These reactions undo all the previous nuclear fusion reactions.

Schwarzschild black holes: a black hole has a density such that the escape velocity is greater than the speed of light (and since not even light can escape, of course it's going to look black). Take photons to have 'mass' $m = E/c^2$. The escape velocity is $v_{\rm esc} = \sqrt{2GM/R}$, so if $v_{\rm esc} > c$ then $R < R_s$, where $$R_s = \frac{2GM}{c^2}$$ is the Schwarzschild radius.

End states of stars: to summarise, there are three main possibilities –
Star develops a degenerate core, nuclear burning stops and the stellar envelope is lost $\to$ degenerate (white) dwarf.
Star develops a degenerate core and ignites nuclear fuel explosively $\to$ complete disruption in a supernova.
Star exhausts all of its nuclear fuel and the core exceeds the Chandrasekhar mass $\to$ core collapse and a compact remnant (neutron star or black hole).

Summary – final fate as a function of initial mass (not the mass of the end state):
$M \leq 0.08\,M_\odot$: no hydrogen burning, supported by degeneracy pressure and Coulomb forces $\to$ planets, brown dwarfs.
$0.08$–$0.48\,M_\odot$: hydrogen burning but no helium burning $\to$ degenerate helium dwarf.
$0.48$–$8\,M_\odot$: hydrogen and helium burning $\to$ degenerate CO dwarf.
$8$–$13\,M_\odot$: complicated burning sequences, no iron core $\to$ neutron star.
$13$–$80\,M_\odot$: iron core, core collapse $\to$ neutron star or black hole.
$M \geq 80\,M_\odot$: pair instability, complete disruption $\to$ no remnant.

Binary stars: as explained earlier, to conserve angular momentum, most stars are members of binary (or multiple star) systems. The orbital period of such systems ranges from 11 minutes to $10^6$ years. Most binaries are far apart with little interaction – in close binaries ($P < 10$ yr), mass transfer from one star to another can occur.

Classification of binaries
Visual binaries: we can see the periodic wobbling of two stars as they orbit their centre of mass.
If the motion of only one star is seen, it is known as an astrometric binary.
Spectroscopic binaries: we see the periodic Doppler shifts of spectral lines. If the lines of both stars are detected, it is known as double-lined; if the Doppler shifts of only one star are detected, it is known as single-lined.
Photometric binaries: the periodic variations of fluxes, colours etc. are observed. The trouble here is that the same sort of variations can be caused by single variable stars (Cepheids, RR Lyrae variables).
Eclipsing binaries: if the inclination of the orbital plane is ~$90^\circ$, one or both stars are eclipsed by the other one at some point during their orbit.

Binary mass function:
Radial velocity from the Doppler shift, measured about the position of the COM in the COM frame: $v_1 \sin i = \frac{2\pi a_1 \sin i}{P}$, where $P$ is the period and $a = a_1 + a_2$.
Gravitational force = centripetal force.
Substituting for $a_1\sin i$, $(a_1+a_2)^2$ etc. (basically eliminating all the $a$ terms) leads to
$$f_1(M_2)=\frac{M_2^3\sin^3 i}{(M_1+M_2)^2}=\frac{P(v_1\sin i)^3}{2\pi G}, \qquad f_2(M_1)=\frac{M_1^3\sin^3 i}{(M_1+M_2)^2}=\frac{P(v_2\sin i)^3}{2\pi G}.$$
These are the mass functions – they relate observables like the period and velocity to quantities of interest like the masses $M_1$, $M_2$ and the angle $i$. For a double-lined spectroscopic binary, we can measure $f_1$ and $f_2$ and hence determine $M_1 \sin^3 i$ and $M_2 \sin^3 i$. For visual and eclipsing binaries, $i$ is known, so we can determine $M_1$ and $M_2$. For a single-lined spectroscopic binary, measuring $v_1 \sin i$ constrains $M_2$. For eclipsing binaries, the radii of both stars can also be determined. This is the main source of accurate masses and radii of stars (plus luminosity if the distance to the binary is known).

Roche potential: here we have a restricted three-body problem, where we determine the motion of a test particle in the field of two masses $M_1$ and $M_2$ in a circular orbit about each other. The equation of motion of the particle in a frame rotating with the binary is $$\ddot{\mathbf r} = -\nabla U_{\rm eff} - 2\,\boldsymbol\Omega \times \dot{\mathbf r},$$ where $\Omega = 2\pi/P$ and the last term accounts for the Coriolis force. $U_{\rm eff}$ is an effective potential given by $$U_{\rm eff} = -\frac{GM_1}{|\mathbf r - \mathbf r_1|} - \frac{GM_2}{|\mathbf r - \mathbf r_2|} - \frac{1}{2}\,|\boldsymbol\Omega \times \mathbf r|^2.$$ There are five stationary (Lagrangian) points of the Roche potential $U_{\rm eff}$, where the effective gravity $-\nabla U_{\rm eff} = 0$. Three of these are saddle points: $L_1$, $L_2$, $L_3$.

Roche lobe: the equipotential surface passing through the inner Lagrangian point $L_1$ – this 'connects' the gravitational fields of the two stars (please forgive the apology of a diagram – the shaded areas are the Roche lobes of each star).

Classifications of close binaries
Detached binaries: both stars underfill their Roche lobes (photospheres lie beneath their respective Roche lobes). The two stars undergo gravitational interactions only.
Semi-detached binaries: one star fills its Roche lobe. The Roche-lobe-filling component transfers matter to the detached component. These are known as mass-transferring binaries.
Contact binaries: both stars overfill their Roche lobes. A common photosphere surrounding both components is formed.

Binary mass transfer: 30–50% of all stars experience mass transfer by Roche-lobe overflow during their lifetimes.
(Quasi-)conservative mass transfer: one star accretes mass from another. The mass loser tends to lose most of its envelope, leading to the formation of a helium star. The accretor tends to be rejuvenated, so that it appears to be a younger, more massive star. The orbit generally widens.
Dynamical mass transfer: the stars share a common envelope and eventually spiral into each other. The mass donor engulfs the secondary – either the envelope is ejected to leave a very close binary, or the two stars merge to form a single, rapidly rotating star.
The Algol paradox: the less massive K star of the Algol binary appears to be more evolved – how can this be? In order to solve the paradox, it seems that initially the K star must have been the more massive, therefore evolving more rapidly than its companion B star. During the later stages of its evolution, however, mass is transferred from the K star to the B star – now the K star is less massive, whilst the added mass of the B star makes it more luminous.

Accretion and the Eddington limit
For photons, $E = pc$, so the force per unit area on a perfect absorber is the radiative flux divided by $c$: $\frac{L}{4\pi r^2 c}$.
Actual fraction of radiation absorbed = $\kappa$ per unit mass = $\kappa\rho$ per unit volume.
Actual force on a fluid element (per unit volume): $f_{\rm rad} = \frac{\kappa\rho L}{4\pi r^2 c}$.
Gravitational force on a fluid element per unit volume: $f_{\rm grav} = \frac{GM\rho}{r^2}$.
In equilibrium $f_{\rm rad} = f_{\rm grav}$. Rearranging, we get the maximum luminosity for a star of mass $M$: $$L_{\rm Edd} = \frac{4\pi G M c}{\kappa}.$$
In hydrostatic equilibrium there is a limit on the luminosity, but since $L$ rises steeply with $M$, there is also a limit on the mass. If the limit is exceeded, internal radiation pressure drives the loss of the excess mass.
The Eddington limit also affects the accretion rate. If the observed luminosity comes from accretion, then $\dot{M}$ (i.e. $\frac{dM}{dt}$) depends on $L$, so it is also limited. Maximum accretion rate: $\dot{M}_{\rm max} = \frac{L_{\rm Edd} R}{GM}$.

Violating the Eddington limit: as can be seen from the masses of supermassive black holes, the Eddington limit can be violated. This can happen under the following conditions –
If the pressure is high enough to allow radiation of energy via neutrinos.
If accretion is in one direction and energy is radiated in another direction.

Limit to a star's central pressure
The equation of hydrostatic equilibrium, written in terms of the mass coordinate, is $$\frac{dP}{dm} = -\frac{Gm}{4\pi r^4}.$$ For $r < R$, the radius of the star, $\frac{1}{r^4} > \frac{1}{R^4}$, so integrating from the centre to the surface gives $P_c - P_s > \frac{GM^2}{8\pi R^4}$. Of course, the surface pressure $P_s = 0$, so $$P_c > \frac{GM^2}{8\pi R^4}.$$
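As a rough numerical check of the last two results, here is a short sketch (my own, not part of the notes; the physical constants and the choice of electron-scattering opacity $\kappa \approx 0.034$ m$^2$ kg$^{-1}$ are my assumptions):

import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
M_sun = 1.989e30     # kg
R_sun = 6.96e8       # m
kappa = 0.034        # m^2 kg^-1, electron-scattering opacity (assumed)

M = M_sun
L_edd = 4 * np.pi * G * M * c / kappa            # Eddington luminosity
P_c_min = G * M**2 / (8 * np.pi * R_sun**4)      # lower bound on central pressure

print(f"L_Edd ~ {L_edd:.2e} W")     # ~1e31 W for one solar mass
print(f"P_c   > {P_c_min:.2e} Pa")  # ~5e13 Pa for the Sun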
Suppose we have the following production function: $$F(L,K)=\max_{L_K}H(L,L_K,K)=\max_{L_K}\left[(L-L_K+1)^\alpha(L_K+K)^{1-\alpha}\right]=(L-L_K^*+1)^\alpha(L_K^*+K)^{1-\alpha}$$ with the constraint $L_K\in[0,L]$. We know that $$\frac {dH}{dL_K}=-\alpha(L-L_K+1)^{-1}H+(1-\alpha)(L_K+K)^{-1}H=0.$$ Hence the value of $L_K$ at which the derivative is zero is $L_K^0=(1-\alpha)(L+1)-\alpha K$, and the optimal value $L_K^*$ is: $$ L_K^*=\begin{cases} L_K^0 &\text{ if } &0<L_K^0<L &(1)\\ L&\text { if } &L<L_K^0&(2)\\ 0 &\text { if } &L_K^0<0 &(3) \end{cases} $$ It is clear that if $L_K^*\in(0,L)$ (case $(1)$), then the envelope theorem holds: $$\frac d {dL} F(L,K)=\frac \partial {\partial L}H(L,L_K^*,K)=\alpha(L-L_K^*+1)^{-1}\cdot F(L,K)$$ Moreover, in the third case (3), it is also clear to me that the envelope theorem holds. However, I am not so sure about the second case (2). I would say that the envelope theorem does not hold in this case, because if we substitute $L_K^*=L$ back into the original production function, we get $$F(L,K)=1^\alpha(L+K)^{1-\alpha},$$ and the derivative with respect to $L$ in this case is $$ (1-\alpha)(L+K)^{-1}\cdot F(L,K).$$ For the envelope theorem to hold in case (2), this would require $\alpha= (1-\alpha)(L+K)^{-1}$, which almost always fails. So my question is: Am I right that the envelope theorem doesn't hold when $L_K^*$ is at this corner solution? Does this contradict the theorem, or do I misunderstand it? Is the theorem still correct here?
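Here is a quick numerical illustration of the case-(2) claim (my own sketch; the parameter choices $\alpha=0.3$, $K=1$, $L=1$ are arbitrary, picked so that the corner $L_K^*=L$ binds):

import numpy as np

alpha, K = 0.3, 1.0

def F(L, n=20001):
    # Brute-force the inner maximisation over L_K in [0, L].
    L_K = np.linspace(0.0, L, n)
    H = (L - L_K + 1) ** alpha * (L_K + K) ** (1 - alpha)
    return H.max()

L0, h = 1.0, 1e-4
dF_dL = (F(L0 + h) - F(L0 - h)) / (2 * h)          # numerical total derivative
envelope = alpha * (L0 - L0 + 1) ** (-1) * F(L0)   # naive envelope formula at L_K = L
direct = (1 - alpha) * (L0 + K) ** (-1) * F(L0)    # derivative of (L + K)^(1 - alpha)

print(dF_dL, envelope, direct)
# dF/dL matches `direct` (~0.57), not the naive envelope expression (~0.49).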
This question already has an answer here: Let $f: \mathbb R \rightarrow \mathbb R$ defined by $$f(x) := \begin{cases}x^2 \sin \frac 1 x\ & x \neq 0\\ 0\ & x = 0\end{cases}$$ Show $f$ is differentiable on $\mathbb R$: Let $\epsilon > 0$. $x_0 \neq 0$: $\left\lvert\frac {x^2 \sin \frac 1 x - x_0^2 \sin \frac 1 {x_0}}{x-x_0} \right\rvert \le \epsilon$ I'm thinking that since the fraction is continuous there exists $\delta > 0$ satisfying the above for $x \in [-\delta,\delta] / \{0\}$ ? $x_0 = 0$: $\left\rvert\frac {x^2 \sin \frac 1 x - 0} {x} \right\rvert \le \epsilon$ How can I evaluate this limit ? I cannot say the fraction is continuous at $0$ ? Also I must find the derivative of $f$ and show it is not continuous at $x=0$, but I'm stuck.
Intuitively Deriving the Y Combinator$$ \newcommand{\one}[1]{ \color{red}{ #1 } } \newcommand{\two}[1]{ \color{blue}{ #1 } } \newcommand{\three}[1]{ \color{green}{ #1 } } \newcommand{\four}[1]{ \color{magenta}{ #1 } } $$ The $\lambda$ calculus is designed to be a model of computation, so it may be surprising that it has no built-in way to define a function recursively. As a running example, take a quadrupling function $Q$, which we would like to define by $Q(x) := \text{if } x = 0 \text{ then } 0 \text{ else } 4 + Q(x-1)$. (Quadrupling ($Q$) was chosen over doubling ($D$) and tripling ($T$) because the names $D$ and $T$ are used for other functions later in the post. One could argue that an algorithm such as "Algorithm A(x): 1. Return A(x)", which contains no base case, is still a recursive algorithm. On the contrary, Donald Knuth states in The Art of Computer Programming (Third Edition) that "[an] algorithm must always terminate after a finite number of steps.")

Conditionals and Booleans in $\lambda$ calculus

Any conditional computation if $C$ then $A$ else $B$ is choosing between two results, $A$ and $B$, based on the value of $C$. If $C$ is true, then $A$ is returned, and if $C$ is false, $B$ is returned. In $\lambda$ calculus, true and false are simply defined to return the first and second argument, respectively:$$ \underbrace{ \lambda xy.x }_\text{true} \hspace{20pt} \text{ and } \hspace{20pt} \underbrace{ \lambda xy.y }_\text{false} $$ Usage of $\lambda$ booleans differs slightly from usage of booleans in most systems. Instead of needing to be wrapped in an if-statement or some other construct, they are themselves functions which do the choosing operation. In order to express$$\text{if $x \geq 0$ then $1$ else $-1$}$$we simply apply the boolean $(\geq 0 x)$ to the two branches, giving a function we may call $sign$. (Note that $-1$ is not the function $-$ applied to the value $1$ but rather the single unit negative one. Also: $\geq$, $0$, $1$, $-1$ and all other symbols which are not parameters are not real $\lambda$ calculus constructs, but merely shorthand. Assignment does not exist in $\lambda$ calculus. These values only stand for some unspecified functions, with the promise that these functions work together as expected. For instance, that $\geq00 = \text{true}$, $+24=6$, etc. This is elaborated on in another note. Because assignment doesn't really exist and shorthand is not real $\lambda$ calculus, I tried to avoid using it in this post, generally opting for $\underbrace{\text{underbrace}}$ and $\overbrace{\text{overbrace}}$ labeling notation instead. However, how $\geq$, $1$, $-1$ and the rest of the numbers are defined is out of the scope of this post, so I had little choice but to use shorthand for these items. The behavior of $sign$ is not universally agreed upon. Wikipedia defines $sign(0)=0$ (and also calls the function $sgn$), and $sign(0)$ is sometimes left undefined. I personally have found it useful (particularly in programming) to have $sign(0)=1$. I use that definition here because it's a simpler implementation, requiring only one conditional branch rather than two.) Thus $sign(2)=1$ and $sign(-3)=-1$: $$ \begin{array}{ccc} (\overbrace{\lambda x. (\geq 0 x)(1)(-1)}^{sign})2 & \hspace{20pt} & (\overbrace{\lambda x.(\geq 0 x)(1)(-1)}^{sign})(-3) \\ (\geq 0 2)(1)(-1) && (\geq 0 (-3))(1)(-1) \\ (\underbrace{\lambda ab.a}_\text{true})(1)(-1) && (\underbrace{\lambda ab.b}_\text{false})(1)(-1) \\ 1 && -1 \end{array} $$

Back to Recursion

If we try to write $Q$ in $\lambda$ calculus, we will quickly realize that it isn't so simple. It certainly looks something like$$\lambda x.Zx0(+4(\text{RECUR}(-x1)))$$but what should RECUR be?
The $Z$ function is just $isZero$, defined as: $$ Z(x) := \begin{cases} \text{true} & x = 0 \\ \text{false} & x \neq 0 \end{cases} $$ Recursion is self-reference, so our implementation of $Q$ needs to have access to itself in its body. However, lambdas are anonymous; they have no name with which to refer to themselves. What if we instead pass the function to itself as an argument? Something like:$$ \underbrace{\lambda \color{red}{R}x.Zx0(+4(\color{red}{R}(-x1)))}_{Q^R} $$where $R$ stands for the recursive function itself. (Note that $Q^R$ is not some operation $\cdot^R$ applied to $Q$ but rather just another variable name.) Fuck, there's an issue. $Q^R$ expects its first argument to be the recursive function $R$, but we're only passing it the $x$ value. We can fix this by passing $R$ itself within the body of $Q$:$$ \underbrace{\lambda Rx.Zx0(+4(\color{red}{RR}(-x1)))}_{Q^{RR}} $$and indeed this works:$$ \begin{align*}& \overbrace{\one{(\lambda Rx.Zx0(+4(RR(-x1))))}}^{Q^{RR}} \ \two{Q^{RR}} \ \three 2 \\&= \one{Z\three{2}0(+4(\two{Q^{RR}Q^{RR}}(-\three{2}1)))} \\&= \one{+4(\two{Q^{RR}Q^{RR}}(-\three{2}1))} \\&= \one{+4(\two{Q^{RR}Q^{RR}}\three{1})} \\&= \one{+4(\two{(\lambda Rx.Zx0(+4(RR(-x1)))){Q^{RR}}}\three{1})} \\&= \one{+4(\two{Z\three{1}0(+4({Q^{RR}}(-\three{1}1))))})} \\&= \one{+4(\two{+4({Q^{RR}}(-\three{1}1)))})} \\&= \one{+4(\two{+4(\four{{Q^{RR}}}{Q^{RR}}\three{0}))})} \\&= \one{+4(\two{+4(\four{(\lambda Rx.Zx0(+4RR(-x1))))}{Q^{RR}}\three{0}))})} \\&= \one{+4(\two{+4\four{(Z\three{0}0(+4\two{{Q^{RR}}}(-\three{0}1))))})})} \\&= \one{+4(\two{+4\four{((\overbrace{\lambda ab.a}^\text{true})0(+4\two{{Q^{RR}}}(-\three{0}1))))})})} \\&= \one{+4(\two{+4\four{0})})} \\&= \one{8}\end{align*} $$ In general, a function in double-$R$ form should, when passed to itself, produce the desired recursive function. We can wrap up this passing-to-self or self-invocation into a function $I$: $$ \overbrace{\lambda f.ff}^I $$ so that we can just write $IQ^{RR}$ or, for any double-$R$ $f$, $If$. ($I$ is a fun function because $II =_\beta II$: $$ \begin{align*} & II \\ &= (\lambda f.ff) (\lambda g.gg) \\ &= ff[f / (\lambda g.gg)] \\ &= (\lambda g.gg) (\lambda g.gg) \\ &= II \end{align*} $$ ) If $f$ is a function in double-$R$ form, $If$ is the desired recursive function. Now I don't know about you, but writing functions in double-$R$ form seems like a bit of a pain to me. I don't want to have to remember to write $RR$ instead of $R$ each time I write a recursive function. I want $R$ to be the real recursion function. Let's make something to do this $R$-doubling for us. Call it $D$ for double:$$ \overbrace{ \lambda fR.f(RR) }^D $$ Now we can use the simpler formulation $Q^R$ instead of $Q^{RR}$ and have $D$ do the work for us: $$ \begin{align*} & (\overbrace{\one{\lambda fR.f(RR)}}^D) \ (\overbrace{\two{\lambda Rx.Zx0(+4(RR(-x1)))}}^{Q^R}) \\ &= \one{\lambda R.(\two{\lambda Rx.Zx0(+4(RR(-x1)))})(RR)} \\ &= \one{\lambda R.(\two{\lambda x.Zx0(+4(\one{RR}(-x1)))})} \\ &= \underbrace{\lambda Rx.Zx0(+4(RR(-x1)))}_{Q^{RR}} \\ \end{align*} $$ In general, $D$ should take a recursive function from single-$R$ form to double-$R$ form. And we know that a function in double-$R$ form may be transformed into the desired recursive function via $I$. Thus, if we have a function $f$ in single-$R$ form, $I(Df)$ is the desired recursive function. And $I \circ D$ takes a function from single-$R$ form to recursive form. But what is $I \circ D$? It's the same as $\lambda f.
I(Df)$, which is$$ \begin{align*} & \one{\lambda f.\two{I}(\three{D}f)} \\ &= \one{\lambda f.\two{(\lambda f.ff)}(\three{(\lambda fR.f(RR))}f)} \\ &= \one{\lambda f.\two{(\lambda f.ff)}\three{(\lambda R.\one{f}(RR))}} \\ &= \one{\lambda f.\three{(\lambda R.\one{f}(RR))}\three{(\lambda R.\one{f}(RR))}} \\ &= \lambda f. (\lambda x.{f}(xx))(\lambda x.{f}(xx)) \\ \end{align*} $$ which is the $Y$ combinator! Though we can write e.g. $$ S := \lambda abc.b(abc) $$ this kind of definition of $S$ is only shorthand, and must be expanded to (i.e., replaced with) its definition before evaluation. $\lambda$ calculus does not have any rules on its own to evaluate these kinds of expressions. If we write recursive shorthand such as: $$ F := \lambda x. Fx $$ attempting to expand this for evaluation will cause infinite recursion, so it can never be evaluated as real $\lambda$ calculus. We could, of course, create a new system which allows for these things, but this new system would not be $\lambda$ calculus.
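To cross-check the derivation, here is the same construction in Python (my own sketch; since Python evaluates arguments eagerly, it uses the eta-expanded variant of $\lambda f.(\lambda x.f(xx))(\lambda x.f(xx))$, usually called the Z combinator, to avoid looping forever):

# Eta-expanded Y (a.k.a. Z) combinator:
# lambda f.(lambda x.f(lambda v.x x v))(lambda x.f(lambda v.x x v))
Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Q in "single-R form": it receives the recursive function R as its first argument.
Q_R = lambda R: lambda x: 0 if x == 0 else 4 + R(x - 1)

quadruple = Y(Q_R)
print(quadruple(2))   # 8, matching the reduction of Q^{RR} Q^{RR} 2 above
print(quadruple(10))  # 40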
This story actually starts with Einstein's paper on the photoelectric effect. Einstein proposed that for light waves, $E \propto f$, with a proportionality constant that eventually became known as $h$. Using the relation $E = pc$ from special relativity, you can derive that $pc = hf$, and with $\lambda f = c$ you get $\lambda = \frac{h}{p}$. Remember, though, so far this only applies to light. de Broglie's insight was to use the same relation to define the wavelength of a particle as a function of its momentum. So where does your derivation go wrong? The key step is $v = \lambda f$, which applies to a wave, not a particle. As Qmechanic says, the wave velocity is not the same as the particle velocity. (The former is the phase velocity and the latter is the group velocity.) Even though $\lambda = \frac{h}{p}$ was originally taken as an assumption, you can work backwards (or forwards, depending on your view) and derive it from a more general quantum theory. For example, suppose you start with the Schroedinger equation in free space, $$i\hbar\frac{\partial \Psi(t,x)}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi(t,x)}{\partial x^2}$$ Solutions to this equation take the form $$\Psi(t,x) = \sum_n C_n\exp\biggl(-\frac{i}{\hbar}\bigl(E_nt \pm x\sqrt{2mE_n}\bigr)\biggr) = \sum_n C_n\exp\biggl[-i\biggl(\omega_nt \pm k_nx\biggr)\biggr]$$ This is a wave with multiple individual components, each having angular frequency $$\omega_n = E_n/\hbar$$ and wavenumber $$k_n = \frac{\sqrt{2mE_n}}{\hbar}$$ or equivalently, frequency $$f_n = E_n/h$$ and wavelength $$\lambda_n = \frac{h}{\sqrt{2mE_n}}$$ To come up with de Broglie's relation, you need to find an expression for the momentum carried by the wave. This is done using the momentum operator $\hat{p} = -i\hbar\frac{\partial}{\partial x}$ in $\hat{p}\Psi = p\Psi$. The thing is, it only works for a wavefunction with one component. So if (and only if) all the $C_n$ are zero except one, you can get $$p_n = \mp\hbar k_n = \mp\sqrt{2m E_n}$$ and if you put that together with the definition of $\lambda_n$, you get $\lambda_n = \frac{h}{p_n}$. It may seem like a problem that this procedure only works for single-component waves. It's okay, though, because the wave doesn't actually have a single well-defined wavelength anyway unless it consists of only one component. This is a key point: whenever you talk about the wavelength of a particle, or more precisely the wavelength of the matter wave associated with a particle, you're implicitly assuming that the matter wave has only a single frequency component. This is generally a useful approximation for real particles, but it's never exactly true.
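If it helps, here is a small symbolic check of the single-component case (a sketch using sympy; the variable names are mine):

import sympy as sp

x, t, hbar, m, E = sp.symbols('x t hbar m E', positive=True)
k = sp.sqrt(2*m*E) / hbar                              # wavenumber of one component
psi = sp.exp(-sp.I/hbar * (E*t - sp.sqrt(2*m*E)*x))    # single-component wave

p = sp.simplify((-sp.I*hbar*sp.diff(psi, x)) / psi)    # momentum operator eigenvalue
print(p)                                               # sqrt(2*m*E)

wavelength = 2*sp.pi/k                                 # lambda = 2 pi / k
print(sp.simplify(wavelength - 2*sp.pi*hbar/p))        # 0, i.e. lambda = h/p with h = 2 pi hbar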
I just have a question concerning the third law of thermodynamics. The third law says that the entropy should approach a well-defined constant as the system reaches its ground state, which happens as the temperature goes to zero. Since this constant is independent of the details of the system, we can assume that $S(T\rightarrow 0) = 0$. This is not difficult to understand. If you look at the definition of the entropy, $S = k_B \log{\Omega}$, then this means that the number of microstates is equal to a constant, or in other words, the system is in a well-defined state. If you measured the system $x$ times, it would not change and you would get the same result $x$ times. Am I right so far? Okay. Now let's take a look at one example – the ideal gas. If we calculate the partition function we will get something like: $$ Z(T,V,N) \propto T^{3N/2} $$ And for the entropy we will get: $$ S(T,V,N) \propto \ln{T^{3/2}} $$ Neither of these really fulfils the third law. Or is my assumption wrong – I mean, that the entropy should go to zero? Maybe the ideal gas doesn't fulfil the third law, but my concern is that the calculation of the partition function would be almost the same ($\propto T^\alpha$) if we calculated it for another system, based on the definition. Does anybody have something like a rule of thumb for checking whether a system fulfils the third law after calculating the entropy? Thank you for your help.
Let $\mathbf{F}=3y \ \mathbf{i} -3xz \ \mathbf{j} + (x^2-y^2) \ \mathbf{k}.$ Compute the flux of the vectorfield $\text{curl}(\mathbf{F})$ through the semi-sphere $x^2+y^2+z^2=4, \ z\geq 0$, by using direct parameterization of the surface and computation of $\text{curl}(\mathbf{F}).$ Denote the semisphere by $S$. In the book they start off by noting that $$\iint_S xy \ dS = \iint_S xz \ dS = \iint_S yz \ dS = 0 \quad (1)\\ \iint_S x^2 \ dS = \iint_S y^2 \ dS = \iint_S z^2 \ dS \quad \quad \quad (2)\\ $$ They then proceed computing the curl of the field, which is $(2x-2y,-2x,-2z-3)$ and the normal of the surface, out from the surface is $\mathbf{N}=\frac{1}{2}(x,y,z).$ So $$\text{curl}(\mathbf{F})\cdot \mathbf{N} = \frac{1}{2}(2x^2-4xy-2z^2-3z).$$ Using the symmetries (1) and (2) we get the integral $$\frac{1}{2}\iint_S-3z \ dS.$$ They then proceed to compute $dS$ by arguing that: $S$ is a part of the implicitly defined surface $F(x,y,z)=4$, where $F(x,y,z)=x^2+y^2+z^2,$ so $$dS=\left|\frac{\nabla F}{F_z}\right|dxdy = \left|\frac{(2x,2y,2z)}{2z}\right| \ dxdy= \left|\frac{(x,y,z)}{z}\right| \ dxdy=\frac{2}{z} \ dxdy \quad (3).$$ Question: Can someone intuitively, and with other examples, explain the symmetry properties (1), (2) and why equation (3) holds and what it says?
I have a question about partitions of unity, specifically in the book Calculus on Manifolds by Spivak. In Case 1 of the proof of the existence of partitions of unity, why is there a need for the function $f$? The set $\Phi = \{\varphi_1, \dotsc, \varphi_n\}$ looks like it is already the desired partition of unity. The theorem and proof follow; only Case 1 of the proof is relevant.

I believe that your assertion is correct. The functions $\varphi_{i}$ satisfy all of the conditions of Theorem 3-11. I don't see why Spivak used such an $f$, particularly since the support of $f$ contains $A$. If the support of $f$ at least lay in $A$, then $f=\sum_{i=1}^{n}f\cdot\varphi_{i}$, thus giving a representation of $f$ as a sum of functions with small supports. Since $A$ is compact, we may assume WLOG that the $U_{i}$ are bounded. Therefore, by construction, the supports of the $\psi_{i}$ are compact. Hence, the word "closed" in item ($4$) of Theorem 3-11 can be changed to "compact"; the proof remains unchanged. This helps clarify the first statement of the proof of Theorem 3-12. Note as well that the functions $\varphi_{i}$ are $C^{\infty}$; this basically follows from Problem 2-26.

I was looking back at my question today for some reason and immediately saw why the function $f$ is required. Although $\psi_i$ is smooth with compact support in $U_i$, the functions $\varphi_i$ can only be defined on $U$, where $\sum_{i = 1}^n\psi_i > 0$. The problem is that $\varphi_i$ usually does not go to zero at the boundary of $\operatorname{supp}(\psi_i)$ (much less smoothly extend to zero outside the boundary). You can see this near the boundary of a $\operatorname{supp}(\psi_i)$ which is away from all other $\operatorname{supp}(\psi_j)$: near this boundary $\varphi_i(x) = \frac{\psi_i(x)}{\psi_i(x)} = 1$. The solution is to use a cutoff function $f$ which forces everything to smoothly go to $0$ near the boundary of $U$.

Merry Christmas! I am sorry to bother you. I revised my answer; looking forward to your comments. First of all, as Mr. John mentioned in his answer, the functions $\varphi_{i}$ already satisfy all of the conditions of Theorem 3-11 except for $\textit{(4)}$. A crucial question in Spivak's proof is: why did the author "redundantly" require $f\cdot\varphi_{i}$ rather than $\varphi_{i}$? In my way of thinking, in accordance with Theorem 3-11, the author wanted to extend $\varphi_{i}$ smoothly from its domain – an open subset of $\mathbf {R}^{n}$ – to the whole of $\mathbf {R}^{n}$, and then guarantee that the extended functions $f\cdot \varphi_{i}$ satisfy all four conditions (cf. Problem 2-26$^*$(d)). The details follow.

Def. Let $\phi:X \rightarrow \mathbf {R}$ be a continuous real-valued function whose domain is an arbitrary set $X$ in $\mathbf {R}^{n}$. The support of $\phi$ is defined as the closure of the subset of $X$ where $\phi$ is non-zero, i.e. $supp(\phi):=$ the closure of the set $\left\{\mathbf{x} \in X: \phi(\mathbf{x})\ne0 \right \}$.
Def. $\phi\in C^{\infty}(A,B)$ denotes that $\phi:(\mathbf {R}^{n}\supset )A\rightarrow B(\subset\mathbf {R}^{m})$ is a $C^{\infty}$ function.

From the construction in Case 1, we have compact sets $D_{i}\ (i=1,\cdots,n)$ whose interiors cover $A$. In order to explain briefly why the author replaced $\varphi_{i}$ with $f\cdot \varphi_{i}$, we can work with the specific open subset $U:=\bigcup_{i=1}^{n}\operatorname{int} D_{i}$. If even in this simplest case we need $\varphi_{i}$ multiplied by $f$, then all the more so in general.
case 1 $\textbf{1.}$ $\psi_i\in C^{\infty}(U_i,\mathbf{R}),$ which is positive on $D_i$ and $0$ outside of some closed set contained in $U_i$ .( problem 2-26$^*$(d)) Define $$\widetilde \psi_{i}:=\left\{\begin{matrix} \psi_{i}& x\in U_i\\ 0& x\in {U_i}^{c} \end{matrix}\right.\quad (i=1,2,\cdots,n),$$ then we have $\widetilde \psi_{i}\in C^{\infty}(\mathbf{R}^{n},\mathbf{R})$,$\;\widetilde \psi_{i}\bigg|_{U_{i}}=\widetilde \psi_{i}$ and $\psi_{i}$ (each domain is $U_{i}$) can be smoothly extended to $\widetilde \psi_{i}$(each domian is $\mathbf{R}^{n}$). All functions $$\varphi_{i}=\frac{\psi_{i}}{\sum_{k=1}^{n}\psi_{k}}\;(i=1,\cdots,n)$$ are only constructed on $U$. $$ \forall \: x\in U,\: \sum_{k=1}^{n}\widetilde\psi_{k}\ne 0 ;\;\widetilde\psi_{k}\in C^{\infty}(\mathbf{R}^{n},\mathbf{R}).$$ $$\Longrightarrow \varphi_{i}=\frac{\psi_{i}}{\sum_{k=1}^{n}\psi_{k}}=\frac{\widetilde\psi_{i}}{\sum_{k=1}^{n}\widetilde\psi_{k}}\in C^{\infty}(U,\mathbf{R}) \quad (i=1,2,\cdots,n).$$ $\textbf{2.}$ $f\in C^{\infty}(U,\mathbf{R}),$ which value is $1$ on $A$ and $0$ outside of some closed set contained in $U$.( problem 2-26$^*$(d)) $\\$Obviously,we have $f\cdot \varphi_{i}\in C^{\infty}(U,\mathbf{R})\;(i=1,2,\cdots,n).$ Note that each $f\cdot \varphi_{i}\in C^{\infty}(U,\mathbf{R})$ is only constructed on $U\:!$ $\textbf{3.}$ Finally,we can extend $f\cdot \varphi_{i}$ from $U$ to the whole of $\mathbf {R}^{n}$ smoothly. In fact,since $supp(f\cdot\varphi_{i})\subset U$, define $$\widetilde{f\cdot \varphi_{i}}:=\left\{\begin{matrix} f\cdot \varphi_{i}& x\in supp(f\cdot\varphi_{i})\\ 0 & x\notin supp(f\cdot\varphi_{i}) \end{matrix}\right.\quad (i=1,2,\cdots,n),$$ then we have $\widetilde {f\cdot \varphi_{i}}\in C^{\infty}(\mathbf{R}^{n},\mathbf{R})$,$\;\widetilde {f\cdot \varphi_{i}}\bigg|_{U}=f\cdot\varphi_{i}$ and $f\cdot \varphi_{i}$ (each domain is $U$) can be smoothly extended to $\widetilde {f\cdot \varphi_{i}}$ (each domian is $\mathbf{R}^{n}).\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\blacksquare$
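To see concretely what a cutoff function like $f$ can look like, here is a standard one-dimensional construction (my own sketch, not Spivak's): it is $C^{\infty}$, equal to $1$ on $[-1,1]$ and equal to $0$ outside $(-2,2)$.

import numpy as np

def g(t):
    # Smooth: zero for t <= 0, positive for t > 0 (the classic exp(-1/t) bump ingredient).
    return np.where(t > 0, np.exp(-1.0 / np.maximum(t, 1e-300)), 0.0)

def step(t):
    # Smooth transition from 0 (t <= 0) to 1 (t >= 1).
    return g(t) / (g(t) + g(1.0 - t))

def cutoff(x):
    # C-infinity, equal to 1 on [-1, 1], equal to 0 outside (-2, 2).
    return step(2.0 - np.abs(x))

x = np.linspace(-3, 3, 13)
print(np.round(cutoff(x), 3))
# zeros outside [-2, 2], 0.5 in the transition at |x| = 1.5, ones on [-1, 1]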
Group Action on Subgroup by Left Regular Representation Theorem Let $G$ be a group. Let $H$ be a subgroup of $G$. Let $*: H \times G \to G$ be the operation defined as: $\forall \left({h, g}\right) \in H \times G: h * g = \lambda_h \left({g}\right)$ where $\lambda_h \left({g}\right)$ is the left regular representation of $g$ by $h$. Then $*$ is a group action. Proof The group action axioms are investigated in turn. Let $h_1, h_2 \in H$ and $g \in G$. Thus: \(\displaystyle h_1 * \paren {h_2 * g}\) \(=\) \(\displaystyle h_1 * \paren {\map {\lambda_{h_2} } g}\) Definition of $*$ \(\displaystyle \) \(=\) \(\displaystyle h_1 * \paren {h_2 \circ g}\) Definition of Left Regular Representation \(\displaystyle \) \(=\) \(\displaystyle \map {\lambda_{h_1} } {h_2 \circ g}\) Definition of $*$ \(\displaystyle \) \(=\) \(\displaystyle h_1 \circ \paren {h_2 \circ g}\) Definition of Left Regular Representation \(\displaystyle \) \(=\) \(\displaystyle \paren {h_1 \circ h_2} \circ g\) Group Axiom $G \, 1$: Associativity \(\displaystyle \) \(=\) \(\displaystyle \map {\lambda_{h_1 \circ h_2} } g\) Definition of Left Regular Representation \(\displaystyle \) \(=\) \(\displaystyle \paren {h_1 \circ h_2} * g\) Definition of $*$ demonstrating that Group Action Axiom $GA \, 1$ holds. Then: \(\displaystyle e * g\) \(=\) \(\displaystyle \map {\lambda_e} g\) Definition of $*$ \(\displaystyle \) \(=\) \(\displaystyle e \circ g\) Definition of Left Regular Representation \(\displaystyle \) \(=\) \(\displaystyle g\) Group Axiom $G \, 2$: Identity demonstrating that Group Action Axiom $GA \, 2$ holds. $\blacksquare$ Also defined as Thus under such a convention: $\map {\lambda_h} a$ is written $a \lambda_h$ and: $h * a$ is written $a * h$ (or even $a h$ in sources which do not place a high regard on clarity). \(\displaystyle \paren {a * h_2} * h_1\) \(=\) \(\displaystyle \paren {a \lambda_{h_2^{-1} } } * h_1^{-1}\) Definition of $*$ \(\displaystyle \) \(=\) \(\displaystyle a \lambda_{h_2^{-1} } \lambda_{h_1^{-1} }\) Definition of Left Regular Representation \(\displaystyle \) \(=\) \(\displaystyle a * \paren {h_1 \circ h_2}^{-1}\) Definition of $*$ As it runs contrary to the conventions used on $\mathsf{Pr} \infty \mathsf{fWiki}$, beyond its mention here it will not be used. However, this example indicates how the arbitrary nature of notational conventions can cause the details of results to be equally arbitrarily dependent upon the convention used. Also see
Let's say I have $10$ biased coins. Each coin has a different probability for head. $$\text{coins} = [10\%, 20\%, 30\% ...]$$ I flip each coin once. What is the probability of getting at least $3$ heads? Exactly $3$ heads?

You can use a probability generating function. If the probability that coin $i$ comes up heads is $p_i$ for $i = 1, 2, 3, \dots, 10$, then the probability that you will get exactly $n$ heads when the ten coins are tossed is the coefficient of $x^n$ in $$\prod_{i=1}^{10} (1 - p_i + p_i x)$$ when the product is expanded. This is easy if you have a computer algebra system but tedious otherwise. The result when $p_i = 0.1 i$ is $$0.00036288 x^{10}+0.00699984 x^9+0.0482076 x^8+0.159749 x^7+0.28468 x^6+0.28468 x^5 \\+0.159749 x^4+0.0482076 x^3+0.00699984 x^2+0.00036288 x$$ so the probability of exactly three heads is $0.0482076$. For the probability of at least three heads, add the probabilities for one or two heads and subtract from 1. (It is not possible to get zero heads in this example, because $p_{10} = 1$.)

The solution is easy but cumbersome (if I am using the right word), so I will show the solution for $4$ coins for the sake of simplicity, and assume the probabilities of getting heads are different: $a,b,c,d$. If we have $4$ coins then there are 4 possibilities of getting exactly $3$ heads (HHHT, HHTH, HTHH, THHH) and there is one more possibility (HHHH) of getting at least $3$ heads. Accordingly, the probability of getting exactly $3$ heads is $$abc(1-d)+ab(1-c)d+a(1-b)cd+(1-a)bcd.$$ If we want the "at least" case then we have to add $abcd$. In the case of $10$ coins the number of possibilities for exactly $3$ heads is ${10 \choose 3}=120$ and all have to be listed like above. In the "at least" case we have many more possibilities to list, $\sum_{i=3}^{10} {10\choose i}$, and all have to be depicted.

I wrote some R code to simulate the process of flipping the biased coins. With 1 million iterations, I found $ p(H=3) \approx 0.0481$, which agrees nicely with @awkward's answer, and $p(H\ge 3) \approx 0.992$.

niter <- 1e6  # number of iterations
p <- c(0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0)  # probabilities
results <- rep(0, niter)
for(i in 1:niter){
  trial <- runif(10) < p
  results[i] <- sum(trial)
}
sum(results == 3) / niter   # p(H = 3)
sum(results >= 3) / niter   # p(H >= 3)
hist(results, breaks = c(-1:10)+0.5, freq = FALSE, xlab = "Number of Heads",
     main = paste("Histogram of \n", paste(p, collapse = ", ")))

Here is just some illustration of the calculation process for the "exactly" case: assume we flip just $4$ coins (not $10$). There are in total $\binom{4}{3}=4$ possibilities of getting a head exactly three times (not $\binom{10}{3}=120$ possibilities). The probability for the $i$-th coin to be a head is $p_i^\mathrm{H}=p_i=10i\%=i/10$. The probability for the $i$-th coin to be a tail is $p_i^\mathrm{T}=1-p_i=1-i/10$ for each $1\leq i \leq 10$. Each possibility has a different probability, and that's what makes the question difficult.
The probability for exactly $3$ heads can be calculated for this simplified case as: \begin{align*} p &= p_1^\mathrm{T}~p_2^\mathrm{H}~p_3^\mathrm{H}~p_4^\mathrm{H}~+~p_1^\mathrm{H}~p_2^\mathrm{T}~p_3^\mathrm{H}~p_4^\mathrm{H}~+~p_1^\mathrm{H}~p_2^\mathrm{H}~p_3^\mathrm{T}~p_4^\mathrm{H}~+~p_1^\mathrm{H}~p_2^\mathrm{H}~p_3^\mathrm{H}~p_4^\mathrm{T} \\&= (1-p_1^\mathrm{H})~p_2^\mathrm{H}~p_3^\mathrm{H}~p_4^\mathrm{H}~+~p_1^\mathrm{H}~(1-p_2^\mathrm{H})~p_3^\mathrm{H}~p_4^\mathrm{H}~+~p_1^\mathrm{H}p_2^\mathrm{H}~(1-p_3^\mathrm{H})~p_4^\mathrm{H}~+~p_1^\mathrm{H}~p_2^\mathrm{H}~p_3^\mathrm{H}~(1-p_4^\mathrm{H}) \\&= (1-p_1)~p_2~p_3~p_4~+~p_1~(1-p_2)~p_3~p_4~+~p_1~p_2~(1-p_3)~p_4~+~p_1~p_2~p_3~(1-p_4)\end{align*} If we were to throw the coin $10$ times and ask for the probability of $3$ times head, then it's not very different than the case with throwing the coin $4$ times. We will just be having $10$ levels in the probability tree. Some of the probabilities for the possibilities will look like given below, others will look just as similar, in total we will have to add up the probabilities of all the following probabilities for the $120$ possibilities, which of course are not all given, but I think you can imagine $$ 120 \text{ probabilities } \begin{cases}p_1~p_2~p_3~(1-p_4)~(1-p_5)~(1-p_6)~(1-p_7)~(1-p_8)~(1-p_9)~(1-p_{10})\\(1-p_1)~p_2~p_3~p_4~(1-p_5)~(1-p_6)~(1-p_7)~(1-p_8)~(1-p_9)~(1-p_{10})\\\hspace{6.5cm}\vdots\\ (1-p_1)~(1-p_2)~(1-p_3)~(1-p_4)~(1-p_5)~(1-p_6)~(1-p_7)~p_8~p_9~p_{10}\\\hspace{6.5cm}\vdots\\ p_1~(1-p_2)~(1-p_3)~(1-p_4)~(1-p_5)~p_6~(1-p_7)~(1-p_8)~(1-p_9)~p_{10}\\ (1-p_1)~(1-p_2)~p_3~(1-p_4)~(1-p_5)~p_6~(1-p_7)~p_8~(1-p_9)~(1-p_{10})\\(1-p_1)~p_2~(1-p_3)~(1-p_4)~p_5~(1-p_6)~(1-p_7)~(1-p_8)~p_9~(1-p_{10})\\\hspace{6.5cm}\vdots \end{cases}$$ Maybe there is a closed formula for the sum of all these $120$ probabilities. I would go for programming this.
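Following up on "I would go for programming this": the generating-function approach from the first answer can be evaluated exactly with a few lines (a sketch in Python, for variety alongside the R simulation above):

import numpy as np

p = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])

# Coefficients of prod_i (1 - p_i + p_i x); coeffs[n] = P(exactly n heads).
coeffs = np.array([1.0])
for pi in p:
    coeffs = np.convolve(coeffs, [1 - pi, pi])

print(coeffs[3])          # P(exactly 3 heads)  ~ 0.0482076
print(coeffs[3:].sum())   # P(at least 3 heads) ~ 0.9926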
Traditionally, the null hypothesis is a point value. (It is typically $0$, but can in fact be any point value.) The alternative hypothesis is that the true value is any value other than the null value. Because a continuous variable (such as a mean difference) can take on a value which is indefinitely close to the null value but still not quite equal and ... Consider the case where the null hypothesis is that a coin is 2 headed, i.e. the probability of heads is 1. Now the data is the result of flipping a coin a single time and seeing heads. This results in a p-value of 1.0 which is greater than every reasonable alpha. Does this mean that the coin is 2 headed? it could be, but it could also be a fair coin and ... Ok, here's my first attempt. Close scrutiny and comments appreciated!The Two-Sample HypothesesIf we can frame two-sample one-sided Kolmogorov-Smirnov hypothesis tests, with null and alternate hypotheses along these lines:H$_{0}\text{: }F_{Y}\left(t\right) \geq F_{X}\left(t\right)$, andH$_{\text{A}}\text{: }F_{Y}\left(t\right) < F_{X}\left(t\right)$,... The logic of TOST employed for Wald-type t and z test statistics (i.e. $\theta / s_{\theta}$ and $\theta / \sigma_{\theta}$, respectively) can be applied to the z approximations for nonparametric tests like the sign, sign rank, and rank sum tests. For simplicity I assume that equivalence is expressed symmetrically with a single term, but extending my answer ... I have recently thought about an alternative way of "equivalence testing" based on a distance between the two distributions rather than between their means.There are some methods providing confidence intervals for the overlap of two Gaussian distributions:The overlap $O(P_1,P_2)$ of (between?) two distributions $P_1$ and $P_2$ has a nice probabilistic ... The short answer is yes, you can do it, since the TOST methodology is not restricted to t-tests. The p-value is the larger of the two p-values. A quick Google search led me to a methodological article (Meier U. Nonparametric equivalence testing with respect to the median difference. Pharm Stat. 2010 Apr-Jun;9(2):142-50) describing this procedure in detail. Absence of evidence is not evidence of an absence (the title of an Altman, Bland paper on BMJ). P-values only give us evidence of an absence when we consider them significant. Otherwise, they tell us nothing. Hence, absence of evidence. In other words: we don't know and more data may help. While one can use the t test to test for proportion difference, the z test is a tad more precise, since it uses an estimate of the standard deviation formulated specifically for binomial (i.e. dichotomous, nominal, etc.) data. The same applies to the z test for proportion equivalence.First, the z test for difference in proportions of two independent ... Your logic applies in exactly the same way to the good old one-sided tests (i.e. with $x=0$) that may be more familiar to the readers. For concreteness, imagine we are testing the null $H_0:\mu\le0$ against the alternative that $\mu$ is positive. Then if true $\mu$ is negative, increasing sample size will not yield a significant result, i.e., to use your ... We never "accept the null hypothesis" (without also giving consideration to power and minimum relevant effect size). With a single hypothesis test, we pose a state of nature, $H_{0}$, and then answer some variation of the question "how unlikely are we to have observed the data underlying our test statistic, assuming $H_{0}$ (and our distributional assumption)... 
An alternative to TOST in equivalence testing is based on the confidence interval approach:Let $\Delta$ denote the prespecified equivalence margin and$$\theta := \sup_t |F_X(t) - F_Y(t)|$$the Kolmogorov-Smirnov distance between the unknown underlying distribution functions.Now, if a 90% confidence interval for $\theta$ is completely within $[-\... Regression table presentations are easy enough to modify to accommodate tests for equivalence, including relevance tests—where you base conclusions off of both tests for difference (tests of $H^{^{+}}_{0}$) and tests for equivalence (tests of $H^{^{-}}_{0}$). For example (assuming you are presenting multiple tests in a regression context, hence the $\beta$):... The $1-2\alpha$ is not because you calculate the CI for each group separately. It is because you calculate the "inequivalence" to the upper and to the lower end separately. The parameter $\theta$ lies in the equivalence interval $[\epsilon_L, \epsilon_U]$ iff $$\theta \geq \epsilon_L \wedge \theta \leq \epsilon_U.$$Each part is tested separately by a one ... The null hypothesis, $H_0$, is usually taken to be the thing you have reason to assume. Often times it is the "current state of knowledge" that you wish to show is statistically unlikely.The usual set-up for hypothesis testing is minimize type I error, that is, minimize the chance that we reject the null hypothesis in favor of the alternative $H_1$ even ... Yes. This is equivalence testing. Basically you reverse the null and alternative hypothesis and base the sample size on the power to show that the difference of the means is within the window of equivalence. Blackwelder called it "Proving the null hypothesis." This is commonly done in pharmaceutical clinical trials where equivalence of a generic drug to ... I think one can do all these multiple tests of equivalence within a single linear-mixed models. Given you have multiple (2+) measures after the change took place it is rather natural to present these multiple tests as part of a single repeated-measurements model.In particular, one could define indicator variables between the successive steps and then check ... Very interesting question!!You are using the logical consequence, i.e., the entailment condition. This entailment condition forms the very basis of the classical logic, it guarantees the inference or deduction of a result from a premise.The reasoning behind your proposal follows:If $H_0$ entails $H_0'$, then the observed data should draw more evidence ... It's not a TOST per se, but the Komolgorov-Smirnov test allows one to test for the significance of the difference between a sample distribution and a second reference distribution you can specify. You can use this test to rule out a specific kind of different distribution, but not different distributions in general (at least, not without controlling for ... First question: UMP is, nomen es omen, most powerful. If both the sample size and the equivalence region are small, it may happen to the TOST that confidence intervals will hardly ever fit into the equivalence region. This results in nearly zero power. Also, the TOST is generally conservative (even with an $1-2\alpha$ confidence interval). Whenever the UMP ... When conducting the Kolmogorov-Smirnov test, we assume $H_0:$ the two distributions are equivalent. 
We then calculate a test statistic and, if the corresponding $p$-value is small enough, we reject $H_0$ and conclude $H_A:$ the two distributions are different.As far as hypothesis tests go, we use a $p$-value to quantify the amount of evidence we have to ... These arguments from "Bayesian Estimation Supersedes the t-Test" seem relevant:This article introduces an intuitive Bayesian approach to the analysis of data from two groups. In particular, the analysis reveals the relative credibility of every possible difference of means, every possible difference of standard deviations, and all possible effect sizes. ... I don't know any specific references for this case.In analogy to some of the methods for repeated measures ANOVA, the relevant t-test would use the mean of the two 'after' observations and compare it with the 'before' observation.The variance of the average within difference will be smaller with more observations per individual, so the test still takes ... Since no one answered, I will try to show what I understood and how it should be choosen the region of similarity.The null hypothesis is $H_0 : |\mu_1 - \mu_2| > \varepsilon$ (where $\mu_1$ and $\mu_2$ are means for controller 1 and 2). Thus $\varepsilon$ is the minimum value by which we will define how similar is the response (tracking error) of the ... As I understand it, your model is:$$\text{SEND} = \beta_{0} + \beta_{pt}pt + \beta_{partner}partner + \beta_{town}town + \mathbf{B}_{controls}\mathbf{controls} + \varepsilon$$So your estimated effect of treatment on SEND is given by $\hat{\beta}_{pt}$. Tests for difference are reported in the vanilla output for linear regression in Stata:To the right ... I was also wondering if an epsilon of 2 sets a margin of 2 above and 2 below or a margin of 1 above and 1 below for a total rang of 2. As I couldn't find the answer nor understand the R code I contacted directly the author of the package "equivalence", professor Robinson, who very kindly answered:An epsilon of 2 sets margin for 2 above and 2 below. I think it is a synonym to the equivalence margin. Because there is no exact equivalence it is a range of the similarity.Here is a article which describes the Equivalence and Noninferiority Testing.EDIT: For the hypothesis testing it is necessary to set a acceptable range of unequality. The deviation of the equal margin can be in positive and negative ... Normally with repeated outcome variables one would use a mixed effects ANOVA. The baseline assessment is considered fixed, or given, so it's simply a matter of using an offset. The inference is then based around the intercept, $\beta_0$ of the following linear model:\begin{equation}X^{b,i} - \mbox{offset}(X^{a, i}) = \beta_0 + \gamma_i\end{equation}...
Where priors come from
Rob Zinkov
2015-06-09

When reading up on Bayesian modeling papers it can be bewildering to understand why a particular prior was chosen. The distributions usually are named after people and the density equations are pretty scary. This makes it harder to see why the models were successful. The reality is that many of these distributions are making assumptions about the type of data we have. Although there are hundreds of probability distributions, it's only a dozen or so that are used again and again. The others are often special cases of these dozen or can be created through a clever combination of two or three of these simpler distributions. I'd like to outline this small group of distributions and say what assumptions they encode, where our data came from, and why they get used. Often a prior is employed because the assumptions of the prior match what we know about the parameter generation process. Keep in mind, there are multiple effective priors for a particular problem. The best prior to use for a problem is not some wisdom set in stone. A particular prior is chosen for some combination of analytic tractability, computational efficiency, and whether it produces other recognizable distributions when combined with popular likelihood functions. Better priors are always being discovered by researchers. Distributions are understandable in many different ways. The most intuitive for me is to plot a histogram of values. Here is some helper code for doing that.

Uniform distribution

The uniform distribution is the simplest distribution to explain. Whether you use this one in its continuous case or its discrete case, it is used for the same thing: you have a set of events that are equally likely. Note, the uniform distribution from \(-\infty\) to \(\infty\) is not a probability distribution. This requires you to give lower and upper bounds for your values. This distribution isn't used as often as you'd think, since it's rare that we want hard boundaries on our values.

Normal distribution

The normal distribution is possibly the most frequently used distribution. Sometimes called the Gaussian distribution, it has support over the entire real line. This makes it really convenient, because you don't have to worry about checking boundaries. This is the distribution you want if you find yourself saying things like, "the sensor says 20 km/h +/- 5 km/h". The normal distribution takes as arguments a center (\(\mu\)) and a spread or standard deviation (\(\sigma\)). The value \(\sigma^2\) comes up a lot and is called the variance. The standard deviation tells you that roughly 68% of your data is within one standard deviation of the center, and 95% is within two standard deviations. As an example, if I say my data comes from normal(0, 3), I mean that 95% of my data should be between -6 and 6. It will always have a single maximum value, so if the distribution you had in mind for your problem has multiple modes, don't use it. The normal also comes up a lot because if you have multiple signals that come from any distribution, with enough signals their average converges to the normal distribution. This is one version of what is called the Central Limit Theorem. Feel free to change that uniform and its arguments to any other distribution and you'll see it's always a normal.

Bernoulli distribution

The bernoulli distribution is usually the first distribution people are taught in class. This is the distribution for deciding between two choices. It takes an argument \(\rho\) which dictates how biased we are towards selecting 0 or 1.
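The post's helper code and sampler definitions are not reproduced above, so here is a minimal stand-in (my own sketch, assuming numpy and matplotlib; the names plot_hist and bernoulli are hypothetical, not from the original). It draws Bernoulli samples using only uniform draws, with \(\rho = 0.7\) as in the histogram discussed next:

import numpy as np
import matplotlib.pyplot as plt

def plot_hist(samples, bins=50):
    # Simple helper: histogram of a vector of samples.
    plt.hist(samples, bins=bins, density=True)
    plt.show()

def bernoulli(rho):
    # A Bernoulli(rho) draw is just "is a uniform(0, 1) draw below rho?"
    def sampler(size):
        return (np.random.uniform(0.0, 1.0, size) < rho).astype(int)
    return sampler

plot_hist(bernoulli(0.7)(10000), bins=2)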
These numbers are also considered stand-ins for success (1) and failure (0) and are usually talked about in these terms. Bernoulli can be written in terms of a uniform draw, as in the sketch above: with \(\rho = 0.7\) the histogram puts roughly 70% of the mass on 1.

Bernoulli is also handy since you can define a bunch of distributions in terms of it. The Binomial distribution is a distribution on the natural numbers from \(0\) to \(n\) inclusive. It takes a bias \(\rho\) and counts how often a coin flip succeeds in \(n\) trials. Another distribution which can be encoded with bernoulli is the Negative Binomial distribution. This distribution counts how many times the coin flip succeeds if you are allowed to fail up to \(r\) times.

Categorical distribution

Categorical is the distribution to use when you have a variable that can take on a discrete set of values. Its arguments are the probabilities you believe each value has of appearing. This can be simulated by indexing these values with natural numbers. One thing to note: lots of folks like doing a One-Hot encoding, so each sample is represented as a binary vector where our choice is one and all the other elements of the vector are zero.

def onehot(n, k):
    # 1-by-n identity row with the 1 in column k
    return np.eye(1, n, k=k)[0]

def categorical2(ps):
    def samples(s):
        l = len(ps)
        # r is the random source from the post's helper code (e.g. r = np.random)
        return np.array([onehot(l, r.choice(range(l), p=ps)) for i in range(s)])
    return samples

>>> categorical2([0.2, 0.5, 0.3])(5)
array([[ 0.,  1.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])

Gamma distribution

The gamma distribution comes up all over the place. The intuition for the gamma is that it is the prior on positive real numbers. Now there are many ways to get a distribution over positive numbers. We can take the absolute value of a normal distribution and get what's called a Half-Normal distribution. If we have a variable \(x\) from a (standard) normal distribution, we can also take \(\exp(x)\) and \(x^2\) to get distributions over positive numbers. These are the Lognormal and \(\chi^2\) distributions.

So why use the gamma distribution? Well, when you use the other distributions, you are really saying you believe your variable has some latent property that spans the real line and something made it one-sided. So if you use a lognormal, you are implicitly saying you believe that the log of this variable is symmetric. The assumption of \(\chi^2\) is that your variable is the sum of \(k\) squared factors, where each factor came from the normal(0, 1) distribution. If you don't believe this, why say it?

Some people suggest using gamma because it is conjugate with lots of distributions. This means that when a gamma is used as the prior for a parameter such as a Poisson rate or the precision of a normal, the posterior for that parameter is also a gamma. I would hesitate to use a prior just because it makes performing a computation easier. It's better to have your priors actually encode what you believe. You can always go back later and make something conjugate once you need something more efficient.

The gamma distribution is the main way we encode something to be a positive number. Its parameters, shape \(k\) and scale \(\theta\), roughly let you tune the gamma like the normal distribution: \(k \theta\) specifies the mean value we expect, and \(k \theta^2\) specifies the variance. This is a common theme in most distributions. We want two parameters so we can set the mean and variance. Distributions that don't have this feature are usually generalized until they do. As an example, here is what gamma(5, 1) looks like (see the sketch below). Note where it peaks and how it spreads out.
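A sketch of that example, reusing the hypothetical rng and plot_hist helpers introduced above:

samples = rng.gamma(shape=5.0, scale=1.0, size=10000)   # k = 5, theta = 1
plot_hist(samples)
# mean is k*theta = 5 and variance is k*theta^2 = 5; the bulk of the mass sits around there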
Change the parameters and see how it changes. Also, distributions like \(\chi^2\) (chi-squared) can be defined in terms of gamma. Actually, many distributions can be built from gamma. Taking the reciprocal of a variable from the gamma gives you a value from the Inv-gamma distribution. If we take two independent gamma draws and normalize one by their sum, so the result lies between 0 and 1, we get the Beta distribution. If we want a prior on, say, the categorical, which takes as an argument a list of numbers that sum to 1, we can use a gamma to generate \(k\) numbers and then normalize them. This is precisely the definition of the Dirichlet distribution.

Poisson Distribution

The poisson distribution is the distribution over the number of events arriving in a fixed window of time. It takes an average arrival rate \(\lambda\) and returns the number of events you can expect. In this sense, it's a distribution over natural numbers, and it can be thought of as a discrete analog to the gamma distribution.

Unfortunately, poisson in this form is a bit cumbersome to use. For one, with poisson the mean and variance are both \(\lambda\); you can't tune this distribution to have them be different. To do that, we note that \(\lambda\) has to be a positive real number and put a gamma prior on it. This distribution is sometimes called the Overdispersed Poisson, but it's also a reparameterized Negative Binomial! Different concept, same math.

A related distribution to poisson is the exponential distribution. This distribution measures the time we can expect to elapse between events. If we let the rate at which events happen change with time, we get distributions that are good at modeling how long it takes until an object fails. One example of such a distribution is the Weibull distribution. It has a few forms, but the easiest has \(\lambda\) for the rate at which events (in this case, usually failures) happen, and an extra argument \(k\) which models whether the rate of failure should change as time goes on. A value of \(k=1\) is just the exponential distribution. A value lower than 1 means failure gets less likely over time, and a value over 1 means failure gets more likely over time.

For the more morbid, you can ask whether human mortality is measured with a weibull distribution. Actually, it's the Gompertz distribution that is used. It is like the weibull, except the failure rate grows exponentially with time rather than polynomially.

Heavy-tailed distributions

Often distributions are too optimistic about how close a value stays near the mean. The following are all distributions which are said to have heavy tails. The major advantage of using a heavy-tailed distribution is that it's more robust towards outliers.

Cauchy is a nasty distribution. It has no well-defined mean or variance or any moments. Typically, this is used by people who are or were at one time physicists. You can make a cauchy by taking two independent samples from a normal(0, 1) and dividing one by the other. I hesitate to recommend it since it's hard to work with and there are other heavy-tailed distributions that aren't so intractable.

Student-T or \(t\) can be interpreted as the distribution over a subsampled population from the normal distribution. What's going on here is that because our sample size is so small, atypical values can occur more often than they do in the general population. As your subpopulation grows, the membership looks more like the underlying population from the normal distribution and the t-distribution becomes the normal distribution. The parameter \(\nu\) lets you state how large you believe this subpopulation to be.
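A quick sketch of how much heavier the tails are, again assuming the rng helper introduced above:

t_draws = rng.standard_t(df=2, size=100000)
n_draws = rng.normal(0.0, 1.0, size=100000)
# the t(2) draws land more than 4 units away from 0 hundreds of times more often
print(np.mean(np.abs(t_draws) > 4), np.mean(np.abs(n_draws) > 4))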
The t-distribution can also be generalized to not be centered at 0.

The Laplace distribution arises as an interesting modification of the normal distribution. Let's look at the density function for the normal
\[ \frac{1}{\sigma\sqrt{2\pi}}\, \exp\left({-\frac{(x - \mu)^2}{2 \sigma^2}}\right)\]
The expression inside \(\exp(\dots)\) can be seen as a (squared) L2 norm of the deviation from the center. If we replace it with an L1 norm and change the normalizing constant so it all still integrates to one, we get the laplace. In this way, a laplace centered on 0 can be used to put a strong sparsity prior on a variable while leaving a heavy tail in case the data strongly supports another value. There is a whole family of distributions available by putting in different norms.

Multivariate Normal

When working with multivariate distributions, most can be seen as generalizations of univariate distributions, and the assumptions from the univariate versions still hold. We already mentioned Dirichlet and Categorical as multivariate distributions. The big thing you get with a multivariate generalization is the ability to encode how strongly you believe a collection of variables is correlated with each other.

The Multivariate Normal generalizes the normal distribution to multiple variables. Now \(\mu\) refers to the center of each of them. But \(\sigma\), our standard deviation, isn't just the standard deviation of each variable. Instead we get a covariance matrix that lets us dictate how correlated each variable is with every other variable. To visualize this, sample pairs with a covariance matrix whose off-diagonal entry is 0.5: values in the first variable will tend to be near values in the second, and you don't see them diverging from each other too often. Keep adjusting the matrix to see what it looks like when other values are used.

Wishart and LKJ

The wishart distribution is the prior on symmetric positive-definite matrices. It takes as arguments a multivariate normal distribution and a number \(\nu\) called the degrees of freedom. I think it's best to think of \(\nu\) in terms of matrix factorization. The wishart is made by first building a \(\nu \times p\) matrix \(X\) by concatenating \(\nu\) draws from the multivariate normal and then forming \(X^T X\). The degrees of freedom let you set what you think is the matrix's true dimensionality.

This is commonly used as a prior on covariance matrices for multivariate normal distributions. But wait! What does anything in its generative story have to do with covariance matrices? Lots of people have wondered this. A more modern alternative is the LKJ distribution, a distribution over correlation matrices with a single tuning parameter. The difference here is that the LKJ tuning parameter \(\eta\) controls how independent the variables are. When it is set to 1, the distribution is uniform over correlation matrices. As you set \(\eta\) to larger and larger values, more and more weight is concentrated on the diagonal, meaning we believe that the variables are mostly independent. This prior is easier for some people to tune.

The method for generating samples from this distribution is a little bit tedious. See this fantastic writeup if you are interested in the details. It's arguably better and more understandable than the original paper.
def vine(eta, d):
    def one_sample():
        b = eta + (d - 1) / 2
        P = np.zeros((d, d))   # partial correlations
        S = np.eye(d)          # the resulting correlation matrix
        for k in range(d - 1):
            b = b - 0.5
            for i in range(k + 1, d):
                # r is the random source from the post's helper code (e.g. r = np.random)
                P[k, i] = 2 * r.beta(b, b) - 1    # partial correlation in (-1, 1)
                p = P[k, i]
                for l in range(k - 1, -1, -1):    # convert partial to raw correlation
                    p = p * np.sqrt((1 - P[l, i]**2) * (1 - P[l, k]**2)) + P[l, i] * P[l, k]
                S[k, i] = p
                S[i, k] = p
        return S
    def samples(s):
        return np.array([one_sample() for i in range(s)])
    return samples

m = vine(2.0, 30)(1)[0]
plt.matshow(m, cmap="BuPu")

Multinomial

Categorical in the One-Hot encoding can also be seen as a special case of the Multinomial distribution. The multinomial is related to the categorical the way the binomial is related to the bernoulli: given \(n\) trials of the categorical, this distribution counts how often each of the outcomes happened.

This distribution appears often in natural language processing as a model for the bag-of-words representation of documents. A document can be represented as a list of how often each word occurred in it. The multinomial in that sense can be used to encode our beliefs about the vocabulary.

Conclusions

This doesn't cover all the distributions people are using for priors out there, but it should give an intuition for why the most common ones are in use.
$$\mathfrak{I}=\int\limits_0^1 \left[\log(x)\log(1-x)+\operatorname{Li}_2(x)\right]\left[\frac{\operatorname{Li}_2(x)}{x(1-x)}-\frac{\zeta(2)}{1-x}\right]\mathrm dx=4\zeta(2)\zeta(3)-9\zeta(5)\tag1$$
This integral has haunted me for a while, and I am still unable to evaluate it, which annoys me even more. I first encountered it in Mathematical Analysis $-$ A collection of Problems by Tolaso J. Kos (Page 27, Problem 282), and I am still stumped by it. However, today I came across this question asking for the evaluation of the integral
$$\mathfrak{J}=\int\limits_0^{\pi/2}\frac{\log^2(\sin x)\log^2(\cos x)}{\sin x\cos x}\mathrm dx=\frac12\zeta(5)-\frac14\zeta(2)\zeta(3)\tag2$$
which can be done "rather simply" by invoking the fourth derivative of the Beta Function. I was baffled as I recognized the structure of the final value; moreover, it reminded me of the logarithmic integral $(1)$ I was not able to evaluate. It might turn out that this relation is pure chance, but nevertheless it motivated me to look at $(1)$ again. It seems unlikely that $(1)$ can be done in a way similar to $(2)$ in xpaul's answer to the linked question, due to the involved dilogarithms, but you are welcome to prove me wrong.
I have not got that far with $(1)$, but I noticed two, I would say quite interesting, facts about the integral. First of all, consider the following well-known functional relation of the dilogarithm
$$\operatorname{Li}_2(x)+\operatorname{Li}_2(1-x)=\zeta(2)-\log(x)\log(1-x)$$
which can be used to get rid of the $\log(x)\log(1-x)$-term within $(1)$, leading to
$$\small\int\limits_0^1 \left[\log(x)\log(1-x)+\operatorname{Li}_2(x)\right]\left[\frac{\operatorname{Li}_2(x)}{x(1-x)}-\frac{\zeta(2)}{1-x}\right]\mathrm dx=\int\limits_0^1 \left[\zeta(2)-\operatorname{Li}_2(1-x)\right]\left[\frac{\operatorname{Li}_2(x)}{x(1-x)}-\frac{\zeta(2)}{1-x}\right]\mathrm dx$$
Secondly, applying the substitution $x\mapsto 1-x$ after a minor reshape yields
$$\small\begin{align} \int\limits_0^1 \left[\zeta(2)-\operatorname{Li}_2(1-x)\right]\left[\frac{\operatorname{Li}_2(x)}{x(1-x)}-\frac{\zeta(2)}{1-x}\right]\mathrm dx&=\int\limits_0^1 \left[\frac{\zeta(2)}{1-x}-\frac{\operatorname{Li}_2(1-x)}{1-x}\right]\left[\frac{\operatorname{Li}_2(x)}x-\zeta(2)\right]\mathrm dx\\ &=\int\limits_0^1 \left[\zeta(2)-\operatorname{Li}_2(1-x)\right]\left[\frac{\operatorname{Li}_2(x)}x-\zeta(2)\right]\frac{\mathrm dx}{1-x}\\ &=\int\limits_0^1 \left[\zeta(2)-\operatorname{Li}_2(x)\right]\left[\frac{\operatorname{Li}_2(1-x)}{1-x}-\zeta(2)\right]\frac{\mathrm dx}x\\ &=\int\limits_0^1 \left[\frac{\zeta(2)}x-\frac{\operatorname{Li}_2(x)}x\right]\left[\frac{\operatorname{Li}_2(1-x)}{1-x}-\zeta(2)\right]\mathrm dx \end{align}$$
I want to point out the quite interesting, one could say "almost"-symmetry of the two integrals
$$\begin{align} \mathfrak{I}_1&=\int\limits_0^1 \left[\frac{\zeta(2)}{1-x}-\frac{\operatorname{Li}_2(1-x)}{1-x}\right]\left[\frac{\operatorname{Li}_2(x)}x-\zeta(2)\right]\mathrm dx\\ \mathfrak{I}_2&=\int\limits_0^1 \left[\frac{\zeta(2)}x-\frac{\operatorname{Li}_2(x)}x\right]\left[\frac{\operatorname{Li}_2(1-x)}{1-x}-\zeta(2)\right]\mathrm dx \end{align}$$
which might be helpful for the actual evaluation. But from here on I have no clue how to continue.
Just multiplying the two brackets out does not seem like a good idea to me: on the one hand it is not elegant at all, and on the other hand it would lead to the term $\operatorname{Li}_2(x)\operatorname{Li}_2(1-x)$, which I have no idea how to deal with, given that I am not used to e.g. double series and their evaluation. I also tried various ways of integration by parts, but this seems to be pointless since all variations ended up producing a divergent term, unless I have missed a special choice of $u$ and $\mathrm{d}v$. I have not figured out a suitable substitution nor an appropriate parameter to introduce (for the application of Feynman's trick), and I do not know whether a series expansion would be helpful or not (connected with this issue is the possibility of a double summation, with which I cannot really deal).
Thus, I am asking for a closed-form evaluation of $(1)$, hopefully equal to the given value (which checks out numerically according to WolframAlpha). Even though I have trouble with double series, I would accept an answer invoking them, although I would appreciate a solution that avoids them. Since this integral appeared within a collection of analysis problems, I am rather sure that it has already been evaluated somewhere (maybe even here on MSE, where I was not able to find it). Thanks in advance!
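For reference, the numerical check can be reproduced with mpmath (only a sketch; the split point at $1/2$ is merely to help the quadrature cope with the integrable endpoint singularities):

import mpmath as mp

mp.mp.dps = 30
z2, z3, z5 = mp.zeta(2), mp.zeta(3), mp.zeta(5)

def integrand(x):
    li2 = mp.polylog(2, x)
    return (mp.log(x) * mp.log(1 - x) + li2) * (li2 / (x * (1 - x)) - z2 / (1 - x))

numeric = mp.quad(integrand, [0, mp.mpf(1) / 2, 1])
claimed = 4 * z2 * z3 - 9 * z5
print(numeric)
print(claimed)   # both come out near -1.4231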
Inner automorphism of a group $G$

An automorphism $\phi$ such that
$$ \phi(x) = g^{-1} x g $$
for a certain fixed element $g \in G$: that is, $\phi$ is conjugation by $g$. The set of all inner automorphisms of $G$ forms a normal subgroup $\mathrm{Inn}(G)$ in the group $\mathrm{Aut}(G)$ of all automorphisms of $G$; this subgroup is isomorphic to $G / Z(G)$, where $Z(G)$ is the centre of $G$ (cf. Centre of a group). For example, $Z(S_3)$ is trivial, so $\mathrm{Inn}(S_3) \cong S_3$, whereas for an abelian group every inner automorphism is the identity. Automorphisms that are not inner are called outer automorphisms. The outer automorphism group is the quotient $\mathrm{Out}(G) = \mathrm{Aut}(G)/\mathrm{Inn}(G)$.

Other relevant concepts include those of an inner automorphism of a monoid (a semi-group with a unit element) and an inner automorphism of a ring (associative with a unit element), which are introduced in a similar way using invertible elements.

Comments

Let $\mathfrak{g}$ be a Lie algebra and $x \in \mathfrak{g}$ an element of $\mathfrak{g}$ for which $\mathrm{ad}(x) : y \mapsto [x,y]$ is nilpotent. Then
$$ \exp(\mathrm{ad}(x)) = \mathrm{id} + \mathrm{ad}(x) + \frac{1}{2!}\mathrm{ad}(x)^2 + \cdots $$
defines an automorphism of $\mathfrak{g}$. Such an automorphism is called an inner automorphism of $\mathfrak{g}$. More generally, the elements of the group $\mathrm{Int}(\mathfrak{g})$ generated by them are called inner automorphisms. It is a normal subgroup of $\mathrm{Aut}(\mathfrak{g})$. If $G$ is a real or complex Lie group with semi-simple Lie algebra $\mathfrak{g}$, then the inner automorphisms constitute precisely the identity component of the group $\mathrm{Aut}(\mathfrak{g})$ of automorphisms of $\mathfrak{g}$.

References

[a1] M. Hall jr., "The theory of groups", Macmillan (1959)
[a2] J.E. Humphreys, "Introduction to Lie algebras and representation theory", Springer (1972) pp. §5.4
[a3] J.-P. Serre, "Algèbres de Lie semi-simples complexes", Benjamin (1966)

How to Cite This Entry: Inner automorphism. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Inner_automorphism&oldid=35127
Is there a name for the following problem? Is there a measure of quality of a given solution? How can we even define what a proper solution is?

Context: I want to detect long lines (most lines over 1000 pixels long) in an image. It is known that most of the lines have a similar alignment. To detect them I use the Standard Hough Transform. To detect long lines I need a high angle resolution of the accumulator. The idea is not to use an equidistant partition of the angle axis but to create a new partition based on the number of votes for each angle.

The Problem: Given weights
$$a_0, \ldots, a_{n-1} \in [0,1] \text{ with } \sum_{k=0}^{n-1}a_k = 1$$
which correspond to the equidistant partition of the interval $[0,1[$
$$0, \frac{1}{n}, \frac{2}{n}, \ldots, \frac{n-1}{n},$$
return a new partition with at most as many elements which is "finer" around points with larger weights and "coarser" around points with smaller weights. E.g. given the weights $(0.75, 0, 0, 0.25)$ on the partition $(0, 0.25, 0.5, 0.75)$, the result should be a new partition like $(-0.083, 0.0, 0.083, 0.75)$. As $0$ seems to be a point of interest, there are three points near it, which reflects that.

My solution: The idea is to treat the vector of weights as a density function and to use the quantile function of its cumulative distribution function (cdf).

Example: Given the weights $(0, 0.5, 0, 0.5, 0)$ we construct the corresponding piecewise-constant density and its cdf. Now we shift the given equidistant partition by $+1/(2n) = 1/10$ and put its values into the quantile function, i.e. we put those values on the $y$-axis and search for the corresponding $x$-values of the cdf. $y=0.5$ is omitted since it "cannot decide" which peak it contributes to. The $x$-values obtained this way are shifted by $-1/(2n) = -1/10$ and returned as the new partition. The returned partition in this example is $(0.14, 0.22, 0.58, 0.66)$.

Here is a C++ function which performs those tasks, where the vector "weights" contains the input weights and "sample" contains the new partition after execution:

#include <vector>
using std::vector;

void weighted_partition(vector<double>& sample, const vector<double>& weights) {
    const int n = (int)weights.size();
    double y;
    double F = 0.0;   // running cdf value
    int i = 0;        // index of the current weight bin
    for (int j = 1; j < 2*n; j += 2) {
        y = (double)j/(2*n);            // shifted equidistant target
        while (F < y) {                 // advance until the cdf passes y
            F += weights[i];
            ++i;
        }
        if (F == y) {                   // y lies exactly on a jump: skip it
            continue;
        }
        --i;                            // step back into the bin containing y
        F -= weights[i];
        // invert the piecewise-linear cdf inside bin i and undo the +1/(2n) shift
        sample.push_back((y - F + weights[i] * i) / (weights[i] * n) - 1.0/(2.0 * n));
    }
}

This algorithm seems to return solutions which are intuitively correct, but how can you define "correct"?
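For reference, here is a NumPy sketch that mirrors the same logic (the names are my own; the exact floating-point equality used to skip ambiguous targets is kept from the C++ version):

import numpy as np

def weighted_partition(weights):
    # treat the weights as a piecewise-constant density on [0, 1) and
    # evaluate the inverse cdf at the shifted equidistant targets
    w = np.asarray(weights, dtype=float)
    n = len(w)
    F = np.concatenate(([0.0], np.cumsum(w)))        # cdf at the bin edges 0, 1/n, ..., 1
    targets = (2 * np.arange(n) + 1) / (2 * n)       # equidistant partition shifted by +1/(2n)
    out = []
    for y in targets:
        i = int(np.searchsorted(F, y, side="left")) - 1   # bin whose cdf segment contains y
        if F[i + 1] == y:                            # y sits exactly on a jump: skip it,
            continue                                 # mirroring the C++ exact comparison
        x = i / n + (y - F[i]) / (w[i] * n)          # invert the piecewise-linear cdf
        out.append(x - 1.0 / (2 * n))                # undo the +1/(2n) shift
    return np.array(out)

print(weighted_partition([0.0, 0.5, 0.0, 0.5, 0.0]))  # [0.14 0.22 0.58 0.66]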
You can check that there exists a potential function, $U$, such that $\mathbf{F} = -\nabla U $, since $\nabla \wedge \mathbf{F} = 0$ (then, $\mathbf{F}$ is known as a conservative force field). You can also verify that $U$ is given by:
$$ U(x,y) =- \frac{x^2}{y},$$
which is continuous on its domain, which coincides with the domain of $\mathbf{F}$. Therefore, you can simply compute the work done by the (conservative) force $\mathbf{F}$ as the difference of potential between the two points $A(-1,1)$ and $B(3,2)$, that is to say:
$$\color{blue}{W_{A\to B} = U(A)-U(B)}$$
Hope this helps. Cheers!

Some thoughts: An example of such a force which is not defined somewhere would be the force created by a point mass $M$ acting on a mass $m$, which is given by:
$$ \mathbf{F} = - G \frac{Mm}{|\mathbf{r}|^3} \mathbf{r}.$$
The corresponding potential energy, $U(\mathbf{r}) = -G\dfrac{Mm}{|\mathbf{r}|}$, is not defined at $(x,y,z) = (0,0,0)$, but we can still compute the work done by gravity along any path that avoids the origin as a difference of potential energies.
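For completeness, a quick SymPy sketch of the check above, recovering $\mathbf{F}$ from the stated $U$ via the usual $\mathbf{F} = -\nabla U$ (names are illustrative):

from sympy import symbols, diff, simplify

x, y = symbols('x y')
U = -x**2 / y
Fx, Fy = -diff(U, x), -diff(U, y)            # F = -grad U
curl_z = simplify(diff(Fy, x) - diff(Fx, y)) # z-component of the 2D curl
print(curl_z)                                 # 0, so F is conservative away from y = 0
W = U.subs({x: -1, y: 1}) - U.subs({x: 3, y: 2})   # W_{A->B} = U(A) - U(B)
print(W)                                      # 7/2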
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV (Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...

Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...

Pion, Kaon, and Proton Production in Central Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and $\bar{p}$ production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...

Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11)
The ALICE experiment at the LHC has studied J/$\psi$ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{\mathrm{int}}$ = 5.6 nb$^{-1}$. The fraction ...

Suppression of high transverse momentum D mesons in central Pb-Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...

J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/$\psi$ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_T$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/$\psi$ yield in Pb-Pb is observed with ...

Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV. The $p_T$-differential inclusive ...

Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03)
The yield of charged particles associated with high-$p_T$ trigger particles (8 < $p_T$ < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...

Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of $^{208}$Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...