Emergence of large densities and simultaneous blow-up in a two-species chemotaxis system with competitive kinetics
School of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
$ \begin{eqnarray*} \left\{\begin{array}{lll} u_t = \Delta u-\chi_1\nabla\cdot(u\nabla w)+\mu_1u(1-u-a_1v),\ \ \ &x\in \Omega,\ t>0,\\ v_t = \Delta v-\chi_2\nabla\cdot(v\nabla w)+\mu_2v(1-v-a_2u),\ \ &x\in \Omega,\ t>0,\\ w_t = \Delta w-w+u+v,\ \ &x\in \Omega,\ t>0 \end{array}\right. \end{eqnarray*} $
where $ \Omega\subset\mathbb{R}^n $ ($ n\geq3 $) and $ \chi_i, \mu_i, a_i>0 $ for $ i = 1, 2 $.
Mathematics Subject Classification: Primary: 35B44; Secondary: 92C17, 35K55.
Citation: Yan Li. Emergence of large densities and simultaneous blow-up in a two-species chemotaxis system with competitive kinetics. Discrete & Continuous Dynamical Systems - B, 2019, 24 (10): 5461-5480. doi: 10.3934/dcdsb.2019066
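As a rough illustration of the system's dynamics (not part of the paper: the parameter values, grid, and the explicit scheme below are all illustrative choices), a minimal 1D finite-difference sketch with no-flux boundaries:

```python
import numpy as np

# Minimal 1D explicit finite-difference sketch of the two-species
# chemotaxis-competition system on (0, 1) with no-flux boundaries.
# All parameter values (chi_i, mu_i, a_i) are illustrative only.
N, dx, dt, steps = 100, 1.0 / 100, 1e-5, 200
chi1, chi2 = 1.0, 1.0
mu1, mu2 = 1.0, 1.0
a1, a2 = 0.5, 0.5

xs = np.linspace(0, 1, N)
u = 1.0 + 0.1 * np.cos(np.pi * xs)
v = 1.0 - 0.1 * np.cos(np.pi * xs)
w = np.ones(N)

def lap(f):
    # Laplacian with homogeneous Neumann (no-flux) boundary conditions
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
    g[0] = 2 * (f[1] - f[0]) / dx**2
    g[-1] = 2 * (f[-2] - f[-1]) / dx**2
    return g

def div_chem(f, w):
    # central-difference approximation of d/dx (f * dw/dx)
    wx = np.gradient(w, dx)
    return np.gradient(f * wx, dx)

for _ in range(steps):
    u = u + dt * (lap(u) - chi1 * div_chem(u, w) + mu1 * u * (1 - u - a1 * v))
    v = v + dt * (lap(v) - chi2 * div_chem(v, w) + mu2 * v * (1 - v - a2 * u))
    w = w + dt * (lap(w) - w + u + v)

print(u.max(), v.max(), w.max())
```

With these mild initial data the explicit scheme stays stable (dt is well below the diffusive limit dx²/2) and all three densities remain positive over the short run shown.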
Semidefinite programs (SDPs) -- some of the most useful and versatile optimization problems of the last few decades -- are often pathological: the optimal values of the primal and dual problems may differ and may not be attained. Such SDPs are both theoretically interesting and often impossible to solve; yet, the pathological SDPs in the literature look strikingly similar. Based on our recent work \cite{Pataki:17} we characterize pathological semidefinite systems by certain {\em excluded matrices}, which are easy to spot in all published examples. Our main tool is a normal (canonical) form of semidefinite systems, which makes their pathological behavior easy to verify. The normal form is constructed in a surprisingly simple fashion, using mostly elementary row operations inherited from Gaussian elimination. The proofs are elementary and can be followed by a reader at the advanced undergraduate level. As a byproduct, we show how to transform any linear map acting on symmetric matrices into a normal form, which allows us to quickly check whether the image of the semidefinite cone under the map is closed. We can thus introduce readers to a fundamental issue in convex analysis: the linear image of a closed convex set may not be closed, and often simple conditions are available to verify the closedness, or lack of it.
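The closedness issue in the last sentence can be seen in a standard textbook example (not taken from the abstract itself): project a $2\times 2$ PSD matrix $X$ to $(X_{11}, X_{12})$. The image is $\{(a,b) : a > 0\} \cup \{(0,0)\}$, which is not closed:

```python
# Standard example: the linear map A(X) = (X[0,0], X[0,1]) on 2x2 symmetric
# matrices sends the PSD cone to {(a, b) : a > 0} U {(0, 0)}: for b != 0 and
# a > 0 take X[1,1] = b*b/a, but (0, b) with b != 0 has no PSD preimage.
def in_image(a, b, tol=1e-9):
    # membership test derived from the 2x2 PSD conditions
    # (a >= 0 and a * X11 >= b^2 for some X11 >= 0)
    if a < -tol:
        return False
    if abs(b) <= tol:
        return True
    return a > tol

# Points (1/n, 1) lie in the image for every n ...
points = [(1.0 / n, 1.0) for n in range(1, 6)]
print([in_image(a, b) for a, b in points])  # [True, True, True, True, True]
# ... but their limit (0, 1) does not, so the image is not closed:
print(in_image(0.0, 1.0))  # False
```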
In conic linear programming -- in contrast to linear programming -- the Lagrange dual is not an exact dual: it may not attain its optimal value, or there may be a positive duality gap. The corresponding Farkas' lemma is also not exact (it does not always prove infeasibility). We describe exact duals, and certificates of infeasibility and weak infeasibility for conic LPs which are nearly as simple as the Lagrange dual, but do not rely on any constraint qualification. Some of our exact duals generalize the SDP duals of Ramana, and Klep and Schweighofer to the context of general conic LPs. Some of our infeasibility certificates generalize the row echelon form of a linear system of equations: they consist of a small, trivially infeasible subsystem obtained by elementary row operations. We prove analogous results for weakly infeasible systems. We obtain some fundamental geometric corollaries: an exact characterization of when the linear image of a closed convex cone is closed, and an exact characterization of nice cones. Our infeasibility certificates provide algorithms to generate {\em all} infeasible conic LPs over several important classes of cones; and {\em all} weakly infeasible SDPs in a natural class. Using these algorithms we generate a public domain library of infeasible and weakly infeasible SDPs. The status of our instances can be verified by inspection in exact arithmetic, but they turn out to be challenging for commercial and research codes.
Conic linear programs, among them semidefinite programs, often behave pathologically: the optimal values of the primal and dual programs may differ, and may not be attained. We present a novel analysis of these pathological behaviors. We call a conic linear system $Ax \leq b$ {\em badly behaved} if the value of $\sup \{ \langle c, x \rangle \mid Ax \leq b \}$ is finite but the dual program has no solution with the same value for {\em some} $c$. We describe simple and intuitive geometric characterizations of badly behaved conic linear systems. Our main motivation is the striking similarity of badly behaved semidefinite systems in the literature; we characterize such systems by certain {\em excluded matrices}, which are easy to spot in all published examples. We show how to transform semidefinite systems into a canonical form, which allows us to easily verify whether they are badly behaved. We prove several other structural results about badly behaved semidefinite systems; for example, we show that they are in $NP \cap co\text{-}NP$ in the real number model of computing. As a byproduct, we prove that all linear maps that act on symmetric matrices can be brought into a canonical form; this canonical form allows us to easily check whether the image of the semidefinite cone under the given linear map is closed.
We propose a very simple preprocessing algorithm for semidefinite programming. Our algorithm inspects the constraints of the problem, deletes redundant rows and columns in the constraints, and reduces the size of the variable matrix. It often detects infeasibility. Our algorithm does not rely on any optimization solver: the only subroutine it needs is Cholesky factorization, hence it can be implemented with a few lines of code in machine precision. We present computational results on a set of problems arising mostly from polynomial optimization.
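A sketch of the single numerical subroutine mentioned above, a Cholesky-based PSD test (the tolerance and interface are illustrative choices, not the paper's actual code):

```python
import numpy as np

# Cholesky-based positive-semidefiniteness test: np.linalg.cholesky succeeds
# exactly when its argument is (numerically) positive definite, so we add a
# tiny multiple of the identity to tolerate zero eigenvalues.
def is_psd(A, tol=1e-10):
    """Return True iff the symmetric matrix A is positive semidefinite,
    up to the tolerance tol, tested via Cholesky factorization."""
    A = np.asarray(A, dtype=float)
    try:
        np.linalg.cholesky(A + tol * np.eye(A.shape[0]))
        return True
    except np.linalg.LinAlgError:
        return False

print(is_psd([[2.0, 1.0], [1.0, 2.0]]))  # True  (eigenvalues 1 and 3)
print(is_psd([[1.0, 2.0], [2.0, 1.0]]))  # False (has eigenvalue -1)
```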
In semidefinite programming (SDP), unlike in linear programming, Farkas' lemma may fail to prove infeasibility. Here we obtain an exact, short certificate of infeasibility in SDP by an elementary approach: we reformulate any semidefinite system of the form

$(P)\qquad A_i \bullet X = b_i \ (i = 1, \ldots, m), \quad X \succeq 0$

using only elementary row operations and rotations. When (P) is infeasible, the reformulated system is trivially infeasible. When (P) is feasible, the reformulated system has strong duality with its Lagrange dual for all objective functions. As a corollary, we obtain algorithms to generate the constraints of {\em all} infeasible SDPs and the constraints of {\em all} feasible SDPs with a fixed rank maximal solution.
The facial reduction algorithm of Borwein and Wolkowicz and the extended dual of Ramana provide a strong dual for the conic linear program $$ (P) \qquad \sup \{ \langle c, x \rangle \mid Ax \leq_K b \} $$ in the absence of any constraint qualification. The facial reduction algorithm solves a sequence of auxiliary optimization problems to obtain such a dual. Ramana's dual is applicable when (P) is a semidefinite program (SDP) and is an explicit SDP itself. Ramana, Tuncel, and Wolkowicz showed that these approaches are closely related; in particular, they proved the correctness of Ramana's dual using certificates from a facial reduction algorithm. Here we give a clear and self-contained exposition of facial reduction, of extended duals, and generalize Ramana's dual: -- we state a simple facial reduction algorithm and prove its correctness; and -- building on this algorithm we construct a family of extended duals when $K$ is a {\em nice} cone. This class of cones includes the semidefinite cone and other important cones.
A closed convex cone $K$ is called nice if the set $K^* + F^\perp$ is closed for all faces $F$ of $K$, where $K^*$ is the dual cone of $K$ and $F^\perp$ is the orthogonal complement of the linear span of $F$. The niceness property is important for two reasons: it plays a role in the facial reduction algorithm of Borwein and Wolkowicz, and the question whether the linear image of a nice cone is closed also has a simple answer. We prove several characterizations of nice cones and show a strong connection with facial exposedness. We prove that a nice cone must be facially exposed; in reverse, facial exposedness with an added condition implies niceness. We conjecture that nice and facially exposed cones are actually the same, and give supporting evidence.
Object Oriented Data Analysis is a new area in statistics that studies populations of general data objects. In this article we consider populations of tree-structured objects as our focus of interest. We develop improved analysis tools for data lying in a binary tree space, analogous to classical Principal Component Analysis methods in Euclidean space. Our extensions of PCA are analogs of one-dimensional subspaces that best fit the data. Previous work was based on the notion of tree-lines. In this paper, a generalization of the previous tree-line notion is proposed: k-tree-lines. Previously proposed tree-lines are k-tree-lines where k=1. New sub-cases of k-tree-lines studied in this work are the 2-tree-lines and tree-curves, which explain much more variation per principal component than tree-lines. The optimal principal component tree-lines were computable in linear time. Because 2-tree-lines and tree-curves are more complex, they are computationally more expensive, but yield improved data analysis results. We provide a comparative study of all these methods on a motivating data set consisting of brain vessel structures of 98 subjects.
This study introduces a new method of visualizing complex tree-structured objects. The usefulness of this method is illustrated in the context of detecting unexpected features in a data set of very large trees. The major contribution is a novel two-dimensional graphical representation of each tree, with a covariate coded by color. The motivating data set contains three-dimensional representations of brain artery systems of 105 subjects. Due to inaccuracies inherent in the medical imaging techniques, issues with the reconstruction algorithms and inconsistencies introduced by manual adjustment, various discrepancies are present in the data. The proposed representation enables quick visual detection of the most common discrepancies. For our driving example, this tool led to the modification of 10% of the artery trees and deletion of 6.7%. The benefits of our cleaning method are demonstrated through a statistical hypothesis test on the effects of aging on vessel structure. The data cleaning resulted in improved significance levels.
We propose a very simple preconditioning method for integer programming feasibility problems: replacing the problem $b' \le Ax \le b,\ x \in Z^n$ with $b' \le AUy \le b,\ y \in Z^n$, where $U$ is a unimodular matrix computed via basis reduction, to make the columns of $AU$ short and nearly orthogonal. The reformulation is called rangespace reformulation. It is motivated by the reformulation technique proposed for equality constrained IPs by Aardal, Hurkens and Lenstra. We also study a family of IP instances, called decomposable knapsack problems (DKPs). DKPs generalize the instances proposed by Jeroslow, Chvatal and Todd, Avis, Aardal and Lenstra, and Cornuejols et al. DKPs are knapsack problems with a constraint vector of the form $pM + r$, with $p > 0$ and $r$ integral vectors, and $M$ a large integer. If the parameters are suitably chosen in DKPs, we prove 1) hardness results for these problems, when branch-and-bound branching on individual variables is applied; 2) that they are easy, if one branches on the constraint $px$ instead; and 3) that branching on the last few variables in either the rangespace or the AHL reformulations is equivalent to branching on $px$ in the original problem. We also provide recipes to generate such instances. Our computational study confirms that the behavior of the studied instances in practice is as predicted by the theoretical results.
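A decomposable knapsack constraint vector of the stated form $pM + r$ can be generated in a few lines (the particular $p$, $r$, $M$ below are arbitrary illustrative choices, not instances from the paper):

```python
import numpy as np

# Build a decomposable knapsack constraint vector a = p*M + r:
# p a positive integer vector, r an integral vector, M a large integer.
p = np.array([2, 3, 5, 7])
r = np.array([1, -2, 0, 3])
M = 10000
a = p * M + r
print(a)  # [20001 29998 50000 70003]

# The decomposition a.x = M*(p.x) + r.x is what makes branching on the
# constraint p.x effective: for any 0/1 vector x the value p.x is pinned
# down by a.x up to the small correction r.x.
x = np.array([1, 0, 1, 1])
print(a @ x == M * (p @ x) + (r @ x))  # True
```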
The classical branch-and-bound algorithm for the integer feasibility problem has exponential worst case complexity. We prove that it is surprisingly efficient on reformulated problems, in which the columns of the constraint matrix are short and near orthogonal, i.e. a reduced basis of the generated lattice; when the entries of $A$ (the dense part of the constraint matrix) are from $\{1, \ldots, M\}$ for a large enough $M$, branch-and-bound solves almost all reformulated instances at the root node. We also prove an upper bound on the width of the reformulations along the last unit vector. The analysis builds on the ideas of Furst and Kannan to bound the number of integral matrices for which the shortest vectors of certain lattices are long, and also uses a bound on the size of the branch-and-bound tree based on the norms of the Gram-Schmidt vectors of the constraint matrix. We explore practical aspects of these results. First, we compute numerical values of $M$ which guarantee that 90 and 99 percent of the reformulated problems solve at the root: these turn out to be surprisingly small when the problem size is moderate. Second, we confirm with a computational study that random integer programs become easier as the coefficients grow.
The active field of Functional Data Analysis (about understanding the variation in a set of curves) has been recently extended to Object Oriented Data Analysis, which considers populations of more general objects. A particularly challenging extension of this set of ideas is to populations of tree-structured objects. We develop an analog of Principal Component Analysis for trees, based on the notion of tree-lines, and propose numerically fast (linear time) algorithms to solve the resulting optimization problems. The solutions we obtain are used in the analysis of a data set of 73 individuals, where each data object is a tree of blood vessels in one person's brain.
We prove that the subset sum problem has a polynomial time computable certificate of infeasibility for all weight vectors $a$ with density at most $1/(2n)$ and for almost all integer right hand sides. The certificate is branching on a hyperplane, i.e. by a methodology dual to the one explored by Lagarias and Odlyzko; Frieze; Furst and Kannan; and Coster et al. The proof has two ingredients. We first prove that a vector that is near parallel to $a$ is a suitable branching direction, regardless of the density. Then we show that for a low density $a$ such a near parallel vector can be computed using diophantine approximation, via a methodology introduced by Frank and Tardos. We also show that there is a small number of long intervals whose disjoint union covers the integer right hand sides for which the infeasibility is proven by branching on the above hyperplane.
We show that in a knapsack feasibility problem an integral vector $p$, which is short and near parallel to the constraint vector, gives a branching direction with small integer width. We use this result to analyze two computationally efficient reformulation techniques on low density knapsack problems. Both reformulations have a constraint matrix with columns reduced in the sense of Lenstra, Lenstra, and Lov\'asz. We prove an upper bound on the integer width along the last variable, which becomes 1 when the density is sufficiently small. In the proof we extract from the transformation matrices a vector which is near parallel to the constraint vector $a$. The near parallel vector is a good branching direction in the original knapsack problem, and this transfers to the last variable in the reformulations.
We prove several inequalities on the determinants of sublattices in LLL-reduced bases. They generalize the inequalities on the length of the shortest vector proven by Lenstra, Lenstra, and Lovasz, and show that LLL-reduction finds not only a short vector, but more generally, sublattices with small determinants. We also prove new upper bounds on the product of the norms of the first few vectors.
ISSN: 1078-0947
eISSN: 1553-5231
Discrete & Continuous Dynamical Systems - A
April 2000, Volume 6, Issue 2
Abstract:
In this paper we improve Mather's proof of the existence of the connecting orbit around rotation number zero (Proposition 8.1 in [7]). Our new proof not only ensures the existence of the connecting orbit, but also gives a quantitative estimate of the diffusion time.
Abstract:
In this paper, we will study a simple, piecewise linear model which mimics the transformation in a chaotic electrical circuit: an R-L-Diode circuit driven by a sinusoidal voltage source. This map leads to a complicated chaotic structure, with infinitely many distinct, prime homoclinic points. We prove here that there are infinitely many distinct homoclinic points. Their dynamical classification is not completely understood. They are derived through a nonlinear ($-+$) map, built with piecewise linear pieces. To two different sequences should correspond two distinct prime homoclinic points. We have derived, we believe, the basic phenomena leading to the complicated dynamics.
Abstract:
The equation of motion of a magnetically stabilized satellite in the plane of a circular polar orbit is studied through qualitative methods. Sufficient uniqueness conditions and bilateral bounds for odd periodic solutions are found. A solution with the largest amplitude is indicated and a criterion for its stability is obtained.
Abstract:
After introducing a coding definition for the entropy of a piecewise monotone transformation with discontinuities, an algorithm is presented by which the entropy can be determined by the symbol sequences of the separation points.
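The word-counting idea can be sketched on a map whose entropy is known (the tent map is our illustrative choice here, not an example from the paper): count the distinct symbol words of length $n$ generated by the separation point and estimate the entropy as $\log(\#\text{words})/n$.

```python
import math, random

# The full tent map T(x) = 1 - |2x - 1| is piecewise monotone with one
# separation point (x = 1/2), and its topological entropy is log 2.
# Estimate the entropy as log(#distinct symbol words of length n) / n.
def tent(x):
    return 1.0 - abs(2.0 * x - 1.0)

def word(x, n):
    # itinerary of x relative to the partition {[0, 1/2), [1/2, 1]}
    w = []
    for _ in range(n):
        w.append(0 if x < 0.5 else 1)
        x = tent(x)
    return tuple(w)

random.seed(0)
n = 10
words = {word(random.random(), n) for _ in range(20000)}
h = math.log(len(words)) / n
print(round(h, 3))  # close to log 2 ~ 0.693
```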
Abstract:
We study the finite speed of propagation of the Cauchy-Dirichlet problem for the porous media equation with absorption or convection terms in the strip $\mathfrak R_k^N\times (0,T)$, where $\mathfrak R_k^N=\mathfrak R^N\cap\{x_1, ..., x_k>0\}$, $1\leq k\leq N $ and we find new upper bounds of the free boundary.
Finally, we consider the case of higher order parabolic equations.
Abstract:
This paper deals with existence and regularity of positive solutions of sublinear equations of the form $-\Delta u + b(x)u = \lambda f(u)$ in $\Omega$, where either $\Omega \subset \mathbb R^N$ is a bounded smooth domain, in which case we consider the Dirichlet problem, or $\Omega = \mathbb R^N$, where we look for positive solutions; $b$ is not necessarily coercive or continuous and $f$ is a real function with sublinear growth which may have certain discontinuities. We explore the method of lower and upper solutions associated with some subdifferential calculus.
Abstract:
We consider the notion of shift tangent vector introduced in [7] for real valued BV functions and in [9] for vector valued BV functions. These tangent vectors act on a function $u\in L^1$ shifting horizontally the points of its graph at different rates, generating in such a way a continuous path in $L^1$. The main result of [7] is that the semigroup $\mathcal S$ generated by a scalar strictly convex conservation law is shift differentiable, i.e. paths generated by shift tangent vectors at $u_0$ are mapped into paths generated by shift tangent vectors at $\mathcal S_t u_0$ for almost every $t\geq 0$. This leads to the introduction of a sort of differential, the "shift differential", of the map $u_0 \to \mathcal S_t u_0$.
In this paper, using a simple decomposition of $u\in $BV in terms of its derivative, we extend the results of [9] and we give a unified definition of shift tangent vector, valid both in the scalar and vector case. This extension allows us to study the shift differentiability of the flow generated by a hyperbolic system of conservation laws.
Abstract:
Let $\sigma:\Sigma\to\Sigma$ be a topologically mixing shift of finite type. If $G$ is a group, and $\beta:\Sigma\to G$ is a continuous function, denote by $\sigma_\beta$ the skew-product of $\sigma$ by $\beta$. If $G$ is $\mathbb R^n$, we show examples of continuous multiparameter families of functions $\beta$ for which the skew-products $\sigma_\beta$ are topologically transitive for sets of parameters of full measure. If $G$ is a connected semisimple matrix Lie group, we show examples of functions $\beta$ for which the skew-products $\sigma_\beta$ are topologically transitive.
Abstract:
In this paper we study the boundary value problem for the Hamilton-Jacobi-Isaacs equation of pursuit-evasion differential games with state constraints. We prove existence of a continuous viscosity solution and a comparison theorem that we apply to establish uniqueness of such a solution and its uniform approximation by solutions of discretized equations.
Abstract:
Let $F:(M,\omega) \to (M,\omega)$ be a smooth symplectic diffeomorphism with a fixed point $\mathbf a$ and a heteroclinic orbit, in the sense of lying in the intersection of the central stable and the central unstable manifolds of the fixed point. We study the case when the tangent space at a point of the heteroclinic orbit is the direct sum of three subspaces. The first one is the characteristic bundle of the central stable manifold of $\mathbf a$, the second one is the characteristic bundle of the central unstable manifold of $\mathbf a$, and the third one is tangent to the intersection of the central stable and unstable manifolds.
In this situation, the homoclinic map $\Lambda$ is a smooth and symplectic diffeomorphism of open subsets of the central manifold of $\mathbf a$.
Moreover, if an invariant circle intersects the domain of definition of $\Lambda$ and its image intersects other circle, there are orbits that wander from one circle to the other. This phenomenon is similar to the Arnold diffusion.
The Melnikov Method gives sufficient conditions for the existence of homoclinic maps, and of non-identity homoclinic maps, in a perturbation of a Hamiltonian system.
Abstract:
In this paper, we establish the existence of inertial sets for a class of wave equations in which the coefficient of the second order time derivative is $\varepsilon$. We show that the fractal dimension of these inertial sets does not depend on $\varepsilon$ for $\varepsilon$ small enough. We then compare the asymptotic behavior of the problem (as $\varepsilon\to 0$) through a continuity like property of the inertial sets. The autonomous case and nonautonomous case are studied.
Abstract:
We are concerned with the Riemann problem for the two-dimensional compressible Euler equations in gas dynamics. This paper is a continuation of our program (see [CY1,CY2]) in studying the interaction of nonlinear waves in the Riemann problem. The central point in this issue is the dynamical interaction of shock waves, centered rarefaction waves, and contact discontinuities that connect two neighboring constant initial states in the quadrants. In this paper we focus mainly on the interaction of contact discontinuities, which consists of two genuinely different cases. For each case, the structure of the Riemann solution is analyzed by using the method of characteristics, and the corresponding numerical solution is illustrated via contour plots by using the upwind averaging scheme, second-order in the smooth region of the solution, developed in [CY1]. For one case, the four contact discontinuities roll up and generate a vortex, and the density monotonically decreases to zero at the center of the vortex along the stream curves. For the other, two shock waves are formed and, in the subsonic region between two shock waves, a new kind of nonlinear hyperbolic waves (called smoothed Delta-shock waves) is observed.
Abstract:
In this paper, optimal control problems for semilinear parabolic equations with distributed and boundary controls are considered. Pointwise constraints on the control and on the state are given. Main emphasis is laid on the discussion of second order sufficient optimality conditions. Sufficiency for local optimality is verified under different assumptions imposed on the dimension of the domain and on the smoothness of the given data.
Abstract:
In this paper, by using a trace theorem in the theory of functions of bounded variation, we prove the existence of absolutely continuous invariant measures for a class of piecewise expanding mappings of general bounded domains in any dimension.
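The conclusion can be illustrated numerically on a simple piecewise expanding map (an illustrative choice, not the paper's BV-trace argument): for $T(x) = 3x \bmod 1$ the absolutely continuous invariant measure is Lebesgue measure, so the histogram of a long orbit should be roughly flat.

```python
import random

# For the piecewise expanding map T(x) = 3x mod 1 the absolutely
# continuous invariant measure is Lebesgue measure; a long orbit's
# occupation frequencies over ten equal bins should each be near 0.1.
random.seed(1)
x = random.random()
bins = [0] * 10
N = 200000
for _ in range(N):
    x = (3.0 * x) % 1.0
    bins[min(int(10 * x), 9)] += 1
freqs = [b / N for b in bins]
print([round(f, 3) for f in freqs])  # each close to 0.1
```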
Abstract:
We give a definition of bound set for a very general boundary value problem that generalizes those already known in the literature. We then find sufficient conditions for the intersection of the sublevel sets of a family of scalar functions to be a bound set for the Floquet boundary value problem. Indeed, we distinguish the two cases of locally Lipschitz continuous and merely continuous scalar functions.
Abstract:
We prove the multiplicity of periodic solutions to second order ordinary differential equations in $\mathbb R^2$ with nonlinearities crossing the first two eigenvalues of the differential operator.
Abstract:
We study the global well-posedness of the Cauchy problem for the KP II equation. We prove the global well-posedness in the inhomogeneous-homogeneous anisotropic Sobolev spaces $H_{x,y}^{-1/78+\epsilon,0}\cap H_{x,y}^{-17/144,0}$. Though we require the use of the homogeneous Sobolev space of negative index, we obtain the global well-posedness below $L^2$.
Definition: Limit of Sequence (Real Numbers)
Let $\sequence {x_n}$ be a sequence in $\R$.
Let $\sequence {x_n}$ converge to a value $l \in \R$.
Then $l$ is a limit of $\sequence {x_n}$ as $n$ tends to infinity. This is usually written:
$\displaystyle l = \lim_{n \mathop \to \infty} x_n$

Also known as

A limit of $\sequence {x_n}$ as $n$ tends to infinity can also be presented more tersely as a limit of $\sequence {x_n}$, or even just a limit of $x_n$. Some sources present $\displaystyle \lim_{n \mathop \to \infty} x_n$ as $\lim_n x_n$.
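As a standard illustrative example (not from the source text), the sequence $x_n = 1/n$ converges to $0$:

```latex
% Given any \epsilon > 0, choose N > 1 / \epsilon; then for all n > N:
%   \left| x_n - 0 \right| = \frac 1 n < \frac 1 N < \epsilon
% so by the definition above:
\[
  \lim_{n \mathop \to \infty} \frac 1 n = 0
\]
```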
Question:
Let {eq}f(x) = \begin{cases} 0 & \text{if } x < 0 \\ x & \text{if } 0 \le x \le 10 \\ 20 - x & \text{if } 10 < x \le 20 \\ 0 & \text{if } x > 20 \end{cases}{/eq}
and {eq}g(x) = \int_0^x f(t)\,dt.{/eq}
Find an expression for {eq}g(x){/eq} when {eq}10 < x < 20.{/eq}
Answer Options:
1. {eq}g(x)=20x-\frac {1}{2}x^2-50 {/eq}
2. {eq}g(x)=20x-\frac {1}{2}x^2-100 {/eq}
3. {eq}g(x)=20x-\frac {1}{2}x^2 {/eq}
Integration:
Integration is the process of recovering a function f(x) from its derivative f'(x). To find g(x) for x between 10 and 20, we integrate f(t) from 0 to x, splitting the integral at t = 10 where the formula for f changes.
Answer and Explanation:
{eq}g(x)=\int_0^x f(t)\,dt = \int_0^{10} t\,dt + \int_{10}^x (20-t)\,dt\\ = 50 + \left(20t - \frac{t^2}{2}\right)\Big|_{10}^{x}\\ = 50 + \left(20x - \frac{x^2}{2}\right) - \left(200 - 50\right)\\ = 20x - \frac{x^2}{2} - 100 {/eq}
Thus option (2) is correct. (Integrating $20-t$ over all of $[0,x]$, as option (3) implies, ignores that $f(t)=t$ on $[0,10]$; as a quick check, $g(20)$ must equal the total area $100$, which only option (2) gives.)
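A numeric cross-check of this result, using a simple midpoint-rule quadrature in Python (illustrative only):

```python
def f(t):
    # piecewise integrand from the problem statement
    if t < 0 or t > 20:
        return 0.0
    return float(t) if t <= 10 else 20.0 - t

def g(x, steps=20000):
    # midpoint-rule approximation of the integral of f from 0 to x
    h = x / steps
    return sum(f((i + 0.5) * h) for i in range(steps)) * h

x = 15.0
print(g(x))                          # ~87.5
print(20 * x - 0.5 * x**2 - 100)     # 87.5, matching option (2)
```

At $x = 15$ the two pieces contribute $50$ and $37.5$, so both values agree at $87.5$; option (3) would give $187.5$.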
|
Definition:Relative Complement Contents Definition
Let $S$ be a set and let $T \subseteq S$.
Then the set difference $S \setminus T$ can be written $\relcomp S T$, and is called the
relative complement of $T$ in $S$, or the complement of $T$ relative to $S$.
Thus:
$\relcomp S T = \set {x \in S : x \notin T}$ Also known as
Some authors call this
the complement and use the term relative complement for the set difference $S \setminus T$ when the stipulation $T \subseteq S$ is not required. Others emphasize the connection with set difference by referring to the relative complement as a proper difference.
Thus, in this view, the
relative complement is a specific case of a set difference. Some sources do away with the word relative, and refer just to the complement of $T$ in $S$. Different notations for $\relcomp S T$ mainly consist of variants of the $\complement$: $\map {\mathcal C_S} T$, $\map {\mathbf C_S} T$, $\map {c_S} T$, $\map {C_S} T$, $\operatorname C_S \paren T$
or sometimes:
$\map {T\,^c} S$ $\map {T\,^\complement} S$
... and sometimes the brackets are omitted:
$C_S T$ Some sources do not bother to introduce a specific notation for the relative complement, and instead just use the various notation for set difference: $S \setminus T$ $S / T$ $S - T$
Then the relative complement of $E$ in $\Z$:
$\complement_\Z \paren E$ Also see: results about the relative complement can be found here.
The word
complement comes from the idea of complete-ment, it being the thing needed to complete something else.
It is a common mistake to confuse the words
complement and compliment. Usually the latter is mistakenly used when the former is meant.
The $\LaTeX$ code for \(\relcomp {S} {T}\) is
\relcomp {S} {T} .
This is a custom construct which has been set up specifically for the convenience of the users of $\mathsf{Pr} \infty \mathsf{fWiki}$.
Note that there are two arguments to this operator: the subscript, and the part between the brackets.
If either part is a single symbol, then the braces can be omitted, for example:
\relcomp S T
|
ISSN:
1930-8337
eISSN:
1930-8345
Inverse Problems & Imaging
February 2012 , Volume 6 , Issue 1
Abstract:
We derive asymptotic expansions for the displacement at the boundary of a smooth, elastic body in the presence of small inhomogeneities. Both the body and the inclusions are allowed to be anisotropic. This work extends prior work of Capdeboscq and Vogelius (Math. Modeling Num. Anal., 37, 2003) for the conductivity case. In particular, we obtain an asymptotic expansion of the difference between the displacements at the boundary with and without inclusions, under Neumann boundary conditions, to first order in the measure of the inclusions. We impose no geometric conditions on the inclusions, which need only be measurable sets. The first-order correction contains a moment or polarization tensor $\mathbb{M}$ that encodes the effect of the inclusions. We also derive some basic properties of this tensor $\mathbb{M}$. In the case of thin, strip-like, planar inhomogeneities we obtain a formula for $\mathbb{M}$ only in terms of the elasticity tensors, which we assume strongly convex, their inverses, and a frame on the curve that supports the inclusion. We prove uniqueness of $\mathbb{M}$ in this setting and recover the formula previously obtained by Beretta and Francini (SIAM J. Math. Anal., 38, 2006).
Abstract:
We investigate the problem of determining the stationary temperature field on an inclusion from given Cauchy data on an accessible exterior boundary. On this accessible part the temperature (or the heat flux) is known, and, additionally, on a portion of this exterior boundary the heat flux (or temperature) is also given. We propose a direct boundary integral approach in combination with Tikhonov regularization for the stable determination of the temperature and flux on the inclusion. To determine these quantities on the inclusion, boundary integral equations are derived using Green's functions, and properties of these equations are shown in an $L^2$-setting. An effective way of discretizing these boundary integral equations, based on the Nyström method and trigonometric approximations, is outlined. Numerical examples are included, both with exact and noisy data, showing that accurate approximations can be obtained with small computational effort, and that the accuracy increases with the length of the portion of the boundary where the additional data is given.
Abstract:
In this work, we are concerned with the inverse scattering by obstacles for the linearized, homogeneous and isotropic elastic model. We study the uniqueness issue of detecting smooth obstacles from the knowledge of elastic far field patterns. We prove that the 'pressure' parts of the far field patterns over all directions of measurements corresponding to all 'pressure' (or all 'shear') incident plane waves are enough to guarantee uniqueness. We also establish that the shear parts of the far field patterns corresponding to all the 'shear' (or all 'pressure') incident waves are also enough. This shows that any of the two different types of waves is enough to detect obstacles at a fixed frequency. The proof is reconstructive and it can be used to set up an algorithm to detect the obstacle from the mentioned data.
Abstract:
Diffusion Kurtosis Imaging (DKI) is a new Magnetic Resonance Imaging (MRI) model to characterize the non-Gaussian diffusion behavior in tissues. In reality, the term $bD_{app}-\frac{1}{6}b^2D_{app}^2K_{app}$ in the extended Stejskal and Tanner equation of DKI should be positive for an appropriate range of $b$-values to make sense physically. The positive definiteness of the above term reflects the signal attenuation in tissues during imaging. Hence, it is essential for the validation of DKI.
In this paper, we analyze the positive definiteness of DKI. We first characterize the positive definiteness of DKI through the positive definiteness of a tensor constructed by diffusion tensor and diffusion kurtosis tensor. Then, a conic linear optimization method and its simplified version are proposed to handle the positive definiteness of DKI from the perspective of numerical computation. Some preliminary numerical tests on both synthetical and real data show that the method discussed in this paper is promising.
Abstract:
Inverse obstacle scattering aims to extract information about distant and unknown targets using wave propagation. This study concentrates on a two-dimensional setting using time-harmonic acoustic plane waves as incident fields and taking the obstacles to be sound-hard with smooth or polygonal boundary. Measurement data is simulated by sending one incident wave towards the area of interest and computing the far field pattern (1) on the whole circle of observation directions, (2) only in directions close to backscattering, and (3) only in directions close to forward-scattering. A variant of the enclosure method is introduced, based on applying the far field operator to an explicitly constructed density, yielding information about the convex hull of the obstacle. The numerical evidence presented suggests that the convex hull of obstacles can be approximately recovered from noisy limited-aperture far field data.
Abstract:
We propose a novel framework for energy-based multiphase segmentation over multiple channels. The framework allows the user to combine the information from each channel as the user sees fit, and thus allows the user to define how the information from each channel should influence the result. The framework extends the two-phase Logic Framework [J. Vis. Commun. Image R. 16 (2005) 333-358] model. The logic operators of the Logic Framework are used to define objective functions for multiple phases and a condition is defined that prevents conflict between energy terms. This condition prevents local minima that may occur using ad hoc methods, such as summing the objective functions of each region.
Abstract:
We propose three fast algorithms for solving the inverse problem of the thermoacoustic tomography corresponding to certain acquisition geometries. Two of these methods are designed to process the measurements done with point-like detectors placed on a circle (in 2D) or a sphere (in 3D) surrounding the object of interest. The third inversion algorithm works with the data measured by the integrating line detectors arranged in a cylindrical assembly rotating around the object. The number of operations required by these techniques is equal to $\mathcal{O}(n^{3} \log n)$ and $\mathcal{O}(n^{3} \log^2 n)$ for the 3D techniques (assuming the reconstruction grid with $n^3$ nodes) and to $\mathcal{O}(n^{2} \log n)$ for the 2D problem with $n \times n$ discretization grid. Numerical simulations show that on large computational grids our methods are at least two orders of magnitude faster than the finite-difference time reversal techniques. The results of reconstructions from real measurements done by the integrating line detectors are also presented, to demonstrate the practicality of our algorithms.
Abstract:
The Landweber scheme is widely used in various image reconstruction problems. In previous works, the $\alpha,\beta$-rule was suggested to stop the Landweber iteration so as to obtain proper iteration results. The order of convergence of the discrepancy principle (DP rule), which is a special case of the $\alpha,\beta$-rule, with constant relaxation coefficient $\lambda$ satisfying $0<\lambda\sigma_1^2<1$ ($\|A\|_{V,W}=\sigma_1>0$) has been studied. A sufficient condition for convergence of the Landweber scheme is that the value $\lambda_m\sigma_1^2$ lie in a closed interval, i.e. $0<\varepsilon\leq\lambda_m\sigma_1^2\leq2-\varepsilon$ ($0<\varepsilon<1$). In this paper, we mainly investigate the order of convergence of the $\alpha,\beta$-rule with variable relaxation coefficient $\lambda_m$ satisfying $0 < \varepsilon\leq\lambda_m \sigma_1^2 \leq 2-\varepsilon$. According to the order of convergence, we conclude that the $\alpha,\beta$-rule is the optimal rule for the Landweber scheme.
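For concreteness, the basic Landweber update $x_{m+1} = x_m + \lambda A^T(y - Ax_m)$ with a constant relaxation coefficient can be sketched on a toy least-squares problem (illustrative only, not the paper's setting; the matrix and $\lambda$ are made-up values chosen so that $0<\lambda\sigma_1^2<2$):

```python
# Landweber iteration x_{m+1} = x_m + lam * A^T (y - A x_m) on a toy
# least-squares problem (pure Python; A and lam are made-up values).

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy forward operator
x_true = [1.0, -0.5]
y = matvec(A, x_true)                      # noise-free data

At = transpose(A)
lam = 0.15   # chosen so that 0 < lam * sigma_1^2 < 2 (sigma_1^2 ~ 5.3 here)
x = [0.0, 0.0]
for _ in range(500):
    residual = [yi - ai for yi, ai in zip(y, matvec(A, x))]
    step = matvec(At, residual)
    x = [xi + lam * si for xi, si in zip(x, step)]

print([round(v, 6) for v in x])   # converges toward x_true = [1.0, -0.5]
```

With noise-free data and $\lambda\sigma_1^2 \approx 0.8$, the error contracts geometrically in every eigendirection of $A^TA$, which is why a few hundred iterations suffice here.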
|
If there is a discontinuity in the material, such as a hole or a notch, the stress must flow around it, and the flow lines will pack together in the vicinity of the discontinuity.
The concentration of stress will dissipate as we move away from the stress riser. The proper sign conventions are as shown in the figure. If you squish a tootsie roll, what happens to it? The stress at the yield point is called the yield stress.
Repeated loading often initiates brittle cracks, which grow until failure occurs. Generally, the higher the stress range, the fewer the reversals needed for failure. Combined Stresses: at any point in a loaded material, a general state of stress can be described by three normal stresses (one in each direction) and six shear stresses (two in each direction). If this line is rotated by some angle, then the values of the points at the end of the rotated line will give the values of stress on the x and y faces of the rotated element.
One measurement is called the elastic modulus, and is defined as stress divided by strain. Considered in tandem with the fact that the yield strength is the parameter that predicts plastic deformation in the material, one can make informed decisions on how to increase the strength of a material depending on its microstructural properties and the desired end effect.
Two of the most comprehensive collections of stress concentration factors are Peterson's Stress Concentration Factors and Roark's Formulas for Stress and Strain.
Materials scientists developing new materials need ways to measure progress. We can talk about different types of stress, depending on how the force is applied.
How do we make sure it will last? When calculating the nominal stress, use the maximum value of stress in that area. As you might guess, this type of specialized test equipment is expensive.
We need accurate measurements of properties of existing materials so that engineers can choose the right material for a given project.
How do we choose materials? Strain is the response of a material to stress. This is called plane stress. A material that can undergo large plastic deformation before fracture is called a ductile material.
Distortion energy is the amount of energy that is needed to change the shape. Think back to jolly ranchers and tootsie rolls.
This sudden packing together of the flow lines causes the stress to spike up -- this peak stress is called a stress concentration. Tension Test Equipment: materials scientists and mechanical engineers use specialized test equipment, like the tension test machine in the diagram above, for measuring a material's response to stress.
Of the latter three, the distortion energy theory provides the most accurate results in the majority of stress conditions. The region of the stress-strain curve in which the material returns to its undeformed state when applied forces are removed is called the elastic region.
By taking an actual product fresh from the assembly line and testing it, we can measure how the design holds up in the real world. When the geometry of the material changes, the flow lines move closer together or farther apart to accommodate.
The sub-group can then be considered a single spring with the calculated stiffness, force, and deflection, and that spring can then be considered as a part of another sub-group of springs. Is there a relationship between deflection angle and number of cycles before breakage?
Automated testing with repeated cycles of stress can yield information about how long materials can be expected to last under various environmental conditions. Strain cannot exist without stress. These are brittle, so they do not deform plastically before they break by fracturing.
In ductile materials, local yielding will allow stresses to be redistributed and will reduce the stress around the riser. This important theory is also known as the numeric conversion of toughness of a material in the case of crack existence.
Explanation: the elastic modulus is the ratio of stress to strain, so on the stress-strain curve it is the slope of the elastic region. Stress and strain are related to one another through Hooke's law: (Eq 1) $\sigma = E\varepsilon$, where $\sigma$ = stress, $\varepsilon$ = strain, and
E = Young's modulus.
Stress. Stress is a very important variable in solid mechanics. It is used to determine the amount of pressure that can be put on a part before it will start to yield as well as when it will ultimately break.
Stress is proportional to strain: $\sigma \propto \varepsilon$, so $\sigma = E\varepsilon$, where $E$ is the modulus of elasticity (Young's modulus). Modulus of rigidity: the modulus of rigidity, also known as the shear modulus, is the ratio between shear stress and shear strain within the elastic limit.
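As a quick numerical illustration of $\sigma = E\varepsilon$ (the load, area, and modulus below are made-up but typical values for a small steel bar):

```python
# Worked example of Hooke's law, sigma = E * epsilon (illustrative numbers).
E = 200e9        # Young's modulus of steel, Pa (typical value)
force = 10e3     # axial load, N
area = 1e-4      # cross-sectional area, m^2 (a 10 mm x 10 mm bar)

sigma = force / area    # normal stress, Pa (~100 MPa)
epsilon = sigma / E     # normal strain (dimensionless, ~5e-4)

print(sigma, epsilon)
```

A strain of about 5e-4 means a 1 m bar stretches roughly 0.5 mm, comfortably inside the elastic region for steel.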
Strength of Material Stress and Strain, Page 1. Sample of the study material, part of Chapter 1: Stress and Strain. Stress is the internal resistance offered by the body per unit area; it is represented as force per unit area. Jan 12: This video is the start of a series in engineering mechanics called strength of materials, in particular, stress and strain.
Stress and strain are crucial concepts for all engineers to understand when considering the performance and safety of a design. Normal stress and strain are related by: $$ \sigma = E \epsilon $$ where \(E\) is the elastic modulus of the material, \(\sigma\) is the normal stress, and \(\epsilon\) is the normal strain. Shear stress and strain are related by: $$ \tau = G \gamma $$ where \(G\) is the shear modulus of the material, \(\tau\) is the shear stress, and \(\gamma\) is the shear strain. |
Difference between revisions of "Lower attic"
From Cantor's Attic
Line 7: Line 7:
* [[Gamma | $\Gamma$]]
* [[Gamma | $\Gamma$]]
* [[Church-Kleene omega_1 | $\omega_1^{ck}$]]
* [[Church-Kleene omega_1 | $\omega_1^{ck}$]]
−
* [[epsilon0#epsilon_alpha | the $\epsilon_\alpha$ hierarchy]]
+
* [[epsilon0#epsilon_alpha | the $\epsilon_\alpha$ hierarchy]]
* [[epsilon0#epsilon_1 | $\epsilon_1$]]
* [[epsilon0#epsilon_1 | $\epsilon_1$]]
* [[epsilon0 | $\epsilon_0$]]
* [[epsilon0 | $\epsilon_0$]]
Revision as of 20:54, 27 December 2011
Welcome to the lower attic, where we store the comparatively smaller notions of infinity. Roughly speaking, this is the realm of countable ordinals and their friends.
Up to The middle attic |
Determine the resultant couple moment of the two couples that act on the pipe assembly. The distance from A to B is d = 400 mm. Express the result as a Cartesian vector.
Solution:
To solve this problem, we will first express a position vector from A to B. Note that d = 400 mm as stated in the question.
Locations of points A and B are:
A:(0.35i+0j+0k)\,\text{m} (inferred, so that $r_{AB}$ has no $i$-component), B:(0.35i-0.4\cos30^\circ j+0.4\sin30^\circ k)\,\text{m}
(Simplify the trigonometric values) B:(0.35i-0.346j+0.2k)\,\text{m}
We can now write a position vector from A to B as follows:
r_{AB}=\left\{0i-0.346j+0.2k\right\}\,\text{m}
Let us now calculate the couple moments created by the forces applied to the pipe assembly. Looking at the diagram again, take note of the forces already expressed in Cartesian vector form.
Force 1:
M_1=\begin{vmatrix}\bold i&\bold j&\bold k\\0&-0.346&0.2\\0&0&35\end{vmatrix}
M_1=\left\{-12.11i+0j+0k\right\}\,\text{N}\cdot\text{m}
Force 2:
M_2=\begin{vmatrix}\bold i&\bold j&\bold k\\0&-0.346&0.2\\-50&0&0\end{vmatrix}
M_2=\left\{0i-10j-17.3k\right\}\,\text{N}\cdot\text{m}
The resultant couple moment is the sum of these two moments.
M_c=\left\{-12.11i+0j+0k\right\}+\left\{0i-10j-17.3k\right\}
M_c=\left\{-12.11i-10j-17.3k\right\}\,\text{N}\cdot\text{m}
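The two cross products above can be double-checked with a short script (plain Python, components in i, j, k order):

```python
def cross(a, b):
    # 3D cross product of vectors given as (i, j, k) tuples
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

r_ab = (0.0, -0.346, 0.2)     # position vector from A to B, m
F1 = (0.0, 0.0, 35.0)         # first couple force, N
F2 = (-50.0, 0.0, 0.0)        # second couple force, N

M1 = cross(r_ab, F1)
M2 = cross(r_ab, F2)
Mc = tuple(m1 + m2 for m1, m2 in zip(M1, M2))
print([round(m, 2) for m in Mc])   # [-12.11, -10.0, -17.3]
```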
Final Answer: M_c=\left\{-12.11i-10j-17.3k\right\}\,\text{N}\cdot\text{m} |
Is there an $n^{\text{th}}$ root function in Mathematica?
Is there a way to find $\sqrt[n]{x}$ with Mathematica besides x^(1/n)? This is something different, because the two are not always the same: $$(-1)^{\frac{2}{4}}=i \neq 1= \sqrt[4]{(-1)^2}$$ In the help I only found Sqrt[x], which is the square root, and CubeRoot[x] for the cube root.
Is there a reason that there aren't $n$-th roots implemented? (Assuming they really don't exist and I am not too stupid to find them.)
I am using Mathematica 9.0.1 Student Edition.
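The distinction drawn in the question can be reproduced outside Mathematica; for instance, the following Python sketch contrasts the principal-branch power with the real fourth root (illustrative only):

```python
import cmath

# Principal-branch power: (-1)^(2/4) = exp((2/4) * Log(-1)) = i
principal = cmath.exp((2 / 4) * cmath.log(-1))

# Real fourth root of (-1)^2 = 1
real_root = ((-1) ** 2) ** (1 / 4)

print(principal)   # ~1j, up to rounding
print(real_root)   # 1.0
```

The principal branch simplifies the exponent 2/4 to 1/2 before evaluating, which is exactly why it disagrees with first squaring and then taking the real fourth root.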
|
Proof by Cases
Sequent
If we can conclude $\phi \lor \psi$, and we can infer $\chi$ from $\phi$ alone, and we can infer $\chi$ from $\psi$ alone, then we may infer $\chi$:
$p \lor q, \left({p \vdash r}\right), \left({q \vdash r}\right) \vdash r$ Variants
The following forms can be used as variants of this theorem:
$\left({p \implies r}\right) \land \left({q \implies r}\right) \dashv \vdash \left({p \lor q}\right) \implies r$ $\vdash \left({\left({p \implies r}\right) \land \left({q \implies r}\right)}\right) \iff \left({\left({p \lor q}\right) \implies r}\right)$ $\vdash \left({\left({p \lor q}\right) \land \left({p \implies r}\right) \land \left({q \implies r}\right)}\right) \implies r$
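The biconditional variant can be verified mechanically by checking all eight truth assignments; for example, in Python:

```python
from itertools import product

def implies(a, b):
    # material conditional
    return (not a) or b

# ((p -> r) and (q -> r)) <-> ((p or q) -> r) holds for every assignment
for p, q, r in product([False, True], repeat=3):
    lhs = implies(p, r) and implies(q, r)
    rhs = implies(p or q, r)
    assert lhs == rhs

print("tautology holds for all 8 assignments")
```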
That is, the truth of $\chi$ follows from the truth of either $\phi$ or $\psi$.
Proof by Cases is also known as the
rule of or-elimination. |
Tsukuba Journal of Mathematics Tsukuba J. Math. Volume 30, Number 1 (2006), 171-180. Remarks on the bordism intersection map Abstract
In this paper we give a characterization of the kernel of the bordism intersection map and we present some related results, such as the following: the set of bordism classes of $C^{\infty}$ maps $f : M \to N$ such that rank $df(x) \leq p$ for all $x$ is contained in $J_{p,m-p}(N)$, where $M$ is a smooth closed manifold of dimension $m$, $N$ is a smooth closed manifold, $df$ is the differential of $f$, $J_{p,m-p}(N)$ is the image of the homomorphism $\ell_{\ast}: \mathfrak{N}_{m}(N^{(p)}) \to \mathfrak{N}_{m}(N)$ induced by the inclusion, $0 \leq p \leq m$, and $N^{(p)}$ is the $p$-skeleton of $N$.
Article information Source Tsukuba J. Math., Volume 30, Number 1 (2006), 171-180. Dates First available in Project Euclid: 30 May 2017 Permanent link to this document https://projecteuclid.org/euclid.tkbjm/1496165035 Digital Object Identifier doi:10.21099/tkbjm/1496165035 Mathematical Reviews number (MathSciNet) MR2248290 Zentralblatt MATH identifier 1115.55003 Citation
Biasi, Carlos; Libardi, Alice Kimie Miwa. Remarks on the bordism intersection map. Tsukuba J. Math. 30 (2006), no. 1, 171--180. doi:10.21099/tkbjm/1496165035. https://projecteuclid.org/euclid.tkbjm/1496165035 |
The Annals of Statistics Ann. Statist. Volume 4, Number 2 (1976), 384-395. Extension of the Gauss-Markov Theorem to Include the Estimation of Random Effects Abstract
The general mixed linear model can be written $y = X\alpha + Zb$, where $\alpha$ is a vector of fixed effects and $b$ is a vector of random variables. Assume that $E(b) = 0$ and that $\operatorname{Var} (b) = \sigma^2D$ with $D$ known. Consider the estimation of $\lambda_1'\alpha + \lambda_2'\beta$, where $\lambda_1'\alpha$ is estimable and $\beta$ is the realized, though unobservable, value of $b$. Among linear estimators $c + r'y$ having $E(c + r'y) \equiv E(\lambda_1'\alpha + \lambda_2'b)$, mean squared error $E(c + r'y - \lambda_1'\alpha - \lambda_2'b)^2$ is minimized by $\lambda_1'\hat{\alpha} + \lambda_2'\hat{\beta}$, where $\hat{\beta} = DZ'V^{\tt\#}(y - X\hat{\alpha}), \hat{\alpha} = (X'V^{\tt\#}X) - X'V^{\tt\#}y$, and $V^{\tt\#}$ is any generalized inverse of $V = ZDZ'$ belonging to the Zyskind-Martin class. It is shown that $\hat{\alpha}$ and $\hat{\beta}$ can be computed from the solution to any of a certain class of linear systems, and that doing so facilitates the exploitation, for computational purposes, of the kind of structure associated with ANOVA models. These results extend the Gauss-Markov theorem. The results can also be applied in a certain Bayesian setting.
Article information Source Ann. Statist., Volume 4, Number 2 (1976), 384-395. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176343414 Digital Object Identifier doi:10.1214/aos/1176343414 Mathematical Reviews number (MathSciNet) MR398007 Zentralblatt MATH identifier 0323.62043 JSTOR links.jstor.org Citation
Harville, David. Extension of the Gauss-Markov Theorem to Include the Estimation of Random Effects. Ann. Statist. 4 (1976), no. 2, 384--395. doi:10.1214/aos/1176343414. https://projecteuclid.org/euclid.aos/1176343414 |
This vignette introduces threshold graphs, a class of graphs with a unique centrality ranking, and relevant functions from the netrankr package to work with this class of graphs.
A threshold graph is a graph where all pairs of nodes are comparable by neighborhood inclusion. Formally, \[ \forall u,v \in V: N(u) \subseteq N[v] \; \lor \; N(v) \subseteq N[u]. \]
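To make the definition concrete, pairwise neighborhood inclusion can be checked directly; the following is a small Python sketch (illustrative only -- netrankr itself is an R package and this is not its implementation):

```python
# Pairwise check of neighborhood inclusion: N(u) subset of N[v] or vice versa.
# Graphs are given as adjacency dicts mapping a node to its neighbour set.

def all_pairs_comparable(adj):
    nodes = list(adj)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            closed_u = adj[u] | {u}   # closed neighbourhood N[u]
            closed_v = adj[v] | {v}   # closed neighbourhood N[v]
            if not (adj[u] <= closed_v or adj[v] <= closed_u):
                return False
    return True

star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}    # star graph: threshold
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}   # path P4: not threshold

print(all_pairs_comparable(star))    # True
print(all_pairs_comparable(path4))   # False
```

The star passes because every leaf's neighborhood {center} lies in each closed neighborhood; the path fails already for its two endpoints.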
It is thus clear that all centrality indices induce the same ranking on a threshold graph. More technical details on threshold graphs and results related to centrality can be found in:
Schoch, David, Valente, Thomas W., & Brandes, Ulrik. (2017). Correlations among centrality indices and a class of uniquely ranked graphs. Social Networks, 50, 46-54.
netrankr Package
library(netrankr)
library(igraph)
set.seed(1886) # for reproducibility
Threshold graphs on \(n\) vertices can be constructed iteratively from a sequence of \(0\)'s and \(1\)'s. For each \(0\), an isolated vertex is inserted, and for each \(1\), a vertex that connects to all previously inserted ones. This iterative process is implemented in the threshold_graph function. The parameter n sets the desired number of vertices, and the parameter p is the probability that a dominated vertex is inserted in each step; this parameter roughly equates to the density of the network.
g1 <- threshold_graph(500, 0.4)
g2 <- threshold_graph(500, 0.05)
c(round(graph.density(g1), 2), round(graph.density(g2), 2))
## [1] 0.41 0.03
The class of threshold graphs includes various well-known graphs, for instance star-shaped and complete networks. These graphs can be constructed with p=0 and p=1, respectively.
star <- threshold_graph(6, 0)
complete <- threshold_graph(6, 1)
plot(star, vertex.label = NA, vertex.color = "black")
plot(complete, vertex.label = NA, vertex.color = "black")
To check that all pairs are comparable by neighborhood inclusion, we can use the function comparable_pairs. The function computes the density of the underlying undirected graph induced by the neighborhood-inclusion relation.
g <- threshold_graph(10, 0.4)
P <- neighborhood_inclusion(g)
comparable_pairs(P)
## [1] 1
We construct a random threshold graph and calculate some standard measures of centrality that are included in the igraph package.
g <- threshold_graph(100, 0.1)
cent.df <- data.frame(
  degree = degree(g),
  betweenness = betweenness(g),
  closeness = closeness(g),
  eigenvector = round(eigen_centrality(g)$vector, 8),
  subgraph = subgraph_centrality(g))
We expect that all indices are perfectly rank correlated, since all pairs of nodes are comparable by neighborhood inclusion.
cor.mat <- cor(round(cent.df, 8), method = "kendall")
cor.mat <- round(cor.mat, 2)
cor.mat
##             degree betweenness closeness eigenvector subgraph
## degree        1.00        0.52      1.00        1.00     0.96
## betweenness   0.52        1.00      0.52        0.52     0.50
## closeness     1.00        0.52      1.00        1.00     0.96
## eigenvector   1.00        0.52      1.00        1.00     0.96
## subgraph      0.96        0.50      0.96        0.96     1.00
We, however, obtain correlations that are not equal to one. This is due to the definition of Kendall’s (tie corrected) \(\tau\). Before going into detail, consider the following cases which can arise when comparing two score vectors x and y:
- concordant pairs: x[i]>x[j] & y[i]>y[j], or x[i]<x[j] & y[i]<y[j]
- discordant pairs: x[i]>x[j] & y[i]<y[j], or x[i]<x[j] & y[i]>y[j]
- tied pairs: x[i]=x[j] & y[i]=y[j]
- left ties: x[i]=x[j] & y[i]!=y[j]; right ties: x[i]!=x[j] & y[i]=y[j]
Kendall’s \(\tau\) considers left and right ties as correlation reducing. That is, if two vertices are tied in one ranking but not the other, the correlation is weakened. Left and right ties are, however, not forbidden by the neighborhood inclusion property. The only forbidden case is a discordant pair. That is, \(N(u)\subseteq N[v]\) cannot result in \(c(u)>c(v)\), but it may result in \(c(u)=c(v)\). Also, one can argue that left and right ties distinguish between fine- and coarse-grained indices.
netrankr comes with a function called compare_ranks which counts all occurrences of the above cases. Counting the cases instead of aggregating them helps circumvent the problem of possibly misinterpreting correlation measures.
comp <- compare_ranks(cent.df$degree, cent.df$betweenness)
unlist(comp)
## concordant discordant       ties       left      right
##       1209          0        464          0       3277
Notice that there is a high number of right ties, which influences the correlation if measured with Kendall’s \(\tau\). However, there do not exist any discordant pairs for any pair of indices.
dis.pairs <- matrix(0, 5, 5)
dis.pairs[1, ] <- apply(cent.df, 2, function(x) compare_ranks(cent.df$degree, x)$discordant)
dis.pairs[2, ] <- apply(cent.df, 2, function(x) compare_ranks(cent.df$betweenness, x)$discordant)
dis.pairs[3, ] <- apply(cent.df, 2, function(x) compare_ranks(cent.df$closeness, x)$discordant)
dis.pairs[4, ] <- apply(cent.df, 2, function(x) compare_ranks(cent.df$eigenvector, x)$discordant)
dis.pairs[5, ] <- apply(cent.df, 2, function(x) compare_ranks(cent.df$subgraph, x)$discordant)
dis.pairs
##      [,1] [,2] [,3] [,4] [,5]
## [1,]    0    0    0    0    0
## [2,]    0    0    0    0    0
## [3,]    0    0    0    0    0
## [4,]    0    0    0    0    0
## [5,]    0    0    0    0    0
Although Kendall’s \(\tau\) suggests that the correlations among indices can be low, we see that there do not exist any discordant pairs on threshold graphs.
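The pairwise counting behind compare_ranks can be sketched in plain Python (illustrative; the assignment of "left"/"right" to the first and second vector follows the vignette's usage but is an assumption here, and this is not the netrankr implementation):

```python
# Plain-Python analogue of compare_ranks(): count pair types for two score
# vectors. "left" = tied only in x, "right" = tied only in y (assumed mapping).

def compare_ranks(x, y):
    counts = {"concordant": 0, "discordant": 0,
              "ties": 0, "left": 0, "right": 0}
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                counts["ties"] += 1
            elif dx == 0:
                counts["left"] += 1
            elif dy == 0:
                counts["right"] += 1
            elif (dx > 0) == (dy > 0):
                counts["concordant"] += 1
            else:
                counts["discordant"] += 1
    return counts

deg = [3, 2, 2, 1]   # toy degree-like scores
btw = [5, 0, 0, 0]   # toy betweenness-like scores
print(compare_ranks(deg, btw))
# {'concordant': 3, 'discordant': 0, 'ties': 1, 'left': 0, 'right': 2}
```

The toy vectors mirror the situation in the vignette: many right ties (betweenness collapses several vertices to the same score) but zero discordant pairs.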
As is always the case with artificial graph structures, it is rather unlikely to encounter threshold graphs in the wild. The best we can hope for is to be close to a threshold graph, based on the intuition that the closer a graph is to being a threshold graph, the more its properties resemble one: the closer we are, the more correlated we assume centrality indices to be, and the further away we are, the more disagreement we will find among indices. The problematic point is how to define closeness to a threshold graph. An in-depth discussion of possible measures can be found in the paper mentioned at the beginning of this vignette.
netrankr implements one function that can be used to assess the distance of an arbitrary graph from the set of threshold graphs. The so-called majorization gap operates solely on the degree sequence and determines the number of entries that have to be changed in order to obtain the degree sequence of a threshold graph. Changes cannot, however, be made arbitrarily: the only allowed operation is to lower the degree of one vertex and simultaneously increase the degree of another. For threshold graphs, this measure is obviously zero.
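As an illustration of the idea, the raw majorization gap can be computed directly from a degree sequence. This Python sketch assumes the "corrected conjugate sequence" characterization of threshold sequences; netrankr's graph-based implementation is the reference:

```python
def majorization_gap(degrees):
    """Raw majorization gap of a graphical degree sequence: half the
    total positive deviation of the sequence from its corrected
    conjugate. It is zero exactly for threshold sequences."""
    d = sorted(degrees, reverse=True)
    n = len(d)
    total = 0
    for k in range(1, n + 1):
        # corrected conjugate entry d*_k
        conj = sum(1 for i in range(1, k) if d[i - 1] >= k - 1) \
             + sum(1 for i in range(k + 1, n + 1) if d[i - 1] >= k)
        total += max(conj - d[k - 1], 0)
    return total // 2  # the half-sum is an integer for graphical sequences

# A star K_{1,3} is a threshold graph, a 4-cycle is not:
# majorization_gap([3, 1, 1, 1]) -> 0
# majorization_gap([2, 2, 2, 2]) -> 1
```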
tg <- threshold_graph(200,0.2)
majorization_gap(tg)
## [1] 0
By default, majorization_gap is normalized by the number of edges.
g <- graph.empty(n=11,directed = FALSE)
g <- add_edges(g,c(1,11,2,4,3,5,3,11,4,8,5,9,5,11,6,7,6,8,
                   6,10,6,11,7,9,7,10,7,11,8,9,8,10,9,10))
majorization_gap(g)
## [1] 0.3529412
In this example, around 35% of all entries have to be changed in order to obtain a threshold graph. The normalization is done to compare the majorization gap across networks with different sizes. To obtain the raw number of changes, set norm = FALSE.
majorization_gap(g,norm = FALSE)
## [1] 6
The majorization gap serves as an indicator of how much variance we can expect in the rankings of different centrality indices. The lower it is, the closer we are to a threshold graph, where only one ranking is possible. The further away we are, the more degrees of freedom exist to rank nodes differently, and we will generally observe lower correlations. For more details, again refer to the paper mentioned at the beginning.
We consider Real bundle gerbes on manifolds equipped with an involution and prove that they are classified by their Real Dixmier-Douady class in Grothendieck's equivariant sheaf cohomology. We show that the Grothendieck group of Real bundle gerbe modules is isomorphic to twisted KR-theory for a torsion Real Dixmier-Douady class. Using these modules as building blocks, we introduce geometric cycles for twisted KR-homology and prove that they generate a real-oriented generalised homology theory dual to twisted KR-theory for Real closed manifolds, and more generally for Real finite CW-complexes, for any Real Dixmier-Douady class. This is achieved by defining an explicit natural transformation to analytic twisted KR-homology and proving that it is an isomorphism. Our model both refines and extends previous results by Wang and Baum-Carey-Wang to the Real setting. Our constructions further provide a new framework for the classification of orientifolds in string theory, providing precise conditions for orientifold lifts of H-fluxes and for orientifold projections of open string states.
We prove an analogue of the Hitchin-Kobayashi correspondence for compact, oriented, taut Riemannian foliated manifolds with transverse Hermitian structure. In particular, our Hitchin-Kobayashi theorem holds on any compact Sasakian manifold. We define the notion of stability for foliated Hermitian vector bundles with transverse holomorphic structure and prove that such bundles admit a basic Hermitian-Einstein connection if and only if they are polystable. Our proof is obtained by adapting the proof by Uhlenbeck and Yau to the foliated setting. We relate the transverse Hermitian-Einstein equations to higher dimensional instanton equations and in particular we look at the relation to higher contact instantons on Sasaki manifolds. For foliations of complex codimension 1, we obtain a transverse Narasimhan-Seshadri theorem. We also demonstrate that the weak Uhlenbeck compactness theorem fails in general for basic connections on a foliated bundle. This shows that not every result in gauge theory carries over to the foliated setting.
We calculate the $E$-polynomials of the $SL_3(\mathbb{C})$ and $GL_3(\mathbb{C})$-character varieties of compact oriented surfaces of any genus and the $E$-polynomials of the $SL_2(\mathbb{C})$ and $GL_2(\mathbb{C})$-character varieties of compact non-orientable surfaces of any Euler characteristic. Our methods also give a new and significantly simpler computation of the $E$-polynomials of the $SL_2(\mathbb{C})$-character varieties of compact orientable surfaces, which were computed by Logares, Mu\~noz and Newstead for genus $g=1,2$ and by Martinez and Mu\~noz for $g \ge 3$. Our technique is based on the arithmetic of character varieties over finite fields. More specifically, we show how to extend the approach of Hausel and Rodriguez-Villegas used for non-singular (twisted) character varieties to the singular (untwisted) case.
Odd $K$-theory has the interesting property that it admits an infinite number of inequivalent differential refinements. In this paper we provide a bundle-theoretic model for odd differential $K$-theory using the caloron correspondence and prove that this refinement is unique up to a unique natural isomorphism. We characterise the odd Chern character and its transgression form in terms of a connection and Higgs field and discuss some applications. Our model can be seen as the odd counterpart to the Simons-Sullivan construction of even differential $K$-theory. We use this model to prove a conjecture of Tradler-Wilson-Zeinalian regarding a related differential extension of odd $K$-theory.
Let $G$ be a compact, connected, simply-connected Lie group. We use the Fourier-Mukai transform in twisted $K$-theory to give a new proof of the ring structure of the $K$-theory of $G$.
We introduce a Banach Lie group $G$ of unitary operators subject to a natural trace condition. We compute the homotopy groups of $G$, describe its cohomology and construct an $S^1$-central extension. We show that the central extension determines a non-trivial gerbe on the action Lie groupoid $G\ltimes\mathfrak{k}$, where $\mathfrak{k}$ denotes the Hilbert space of self-adjoint Hilbert-Schmidt operators. With an eye towards constructing elements in twisted K-theory, we prove the existence of a cubic Dirac operator $\mathbb{D}$ in a suitable completion of the quantum Weil algebra $\mathcal{U}(\mathfrak{g})\otimes Cl(\mathfrak{k})$, which is subsequently extended to a projective family of self-adjoint operators $\mathbb{D}_A$ on $G\ltimes \frak{k}$. While the kernel of $\mathbb{D}_A$ is infinite-dimensional, we show that there is still a notion of finite reducibility at every point, which suggests a generalized definition of twisted K-theory for action Lie groupoids.
We construct the moduli space of contact instantons, an analogue of Yang-Mills instantons defined for contact metric $5$-manifolds, and initiate the study of their structure. In the $K$-contact case we give sufficient conditions for smoothness of the moduli space away from reducible connections and show the dimension is given by the index of an operator elliptic transverse to the Reeb foliation. The moduli spaces are shown to be K\"ahler when the $5$-manifold $M$ is Sasakian and hyperK\"ahler when $M$ is transverse Calabi-Yau. We show how the transverse index can be computed in various cases; in particular we compute the index for the toric Sasaki-Einstein spaces $Y^{p,q}$.
In this paper, we use reduction by extended actions to give a construction of transitive Courant algebroids from string classes. We prove that T-duality commutes with the reductions and thereby determine global conditions for the existence of T-duals in heterotic string theory. In particular we find that T-duality exchanges string structures and gives an isomorphism of transitive Courant algebroids. Consequently we derive the T-duality transformation for generalised metrics and show that the heterotic Einstein equations are preserved. The presence of string structures significantly extends the domain of applicability of T-duality, and this is illustrated by several classes of examples.
In this paper we show that the T-duality transform of Bouwknegt, Evslin and Mathai applies to determine isomorphisms of certain current algebras and their associated vertex algebras on topologically distinct T-dual spacetimes compactified to circle bundles with $H$-flux.
We study the structure of abelian extensions of the group $L_qG$ of $q$-differentiable loops (in the Sobolev sense), generalizing from the case of central extensions of the smooth loop group. This is motivated by the aim of understanding the problems with current algebras in higher dimensions. Highest weight modules are constructed for the Lie algebra. The construction is extended to the current algebra of the supersymmetric Wess-Zumino-Witten model. An application to twisted K-theory on $G$ is discussed.
We establish a criterion for when an abelian extension of infinite-dimensional Lie algebras integrates to a corresponding Lie group extension $\hat{G}$ of $G$ by $A$, where $G$ is a connected, simply connected Lie group and $A$ is a quotient of its Lie algebra by some discrete subgroup. When $G$ is non-simply connected, the kernel $A$ is replaced by a central extension $\hat{A}$ of $\pi_1(G)$ by $A$.
In gauge theory, the Faddeev-Mickelsson-Shatashvili anomaly arises as a prolongation problem for the action of the gauge group on a bundle of projective Fock spaces. In this paper, we study this anomaly from the point of view of bundle gerbes and give several equivalent descriptions of the obstruction. These include lifting bundle gerbes with non-trivial structure group bundle and bundle gerbes related to the caloron correspondence.
We outline in detail the general caloron correspondence for the group of automorphisms of an arbitrary principal $G$-bundle $Q$ over a manifold $X$, including the case of the gauge group of $Q$. These results are used to define characteristic classes of gauge group bundles. Explicit but complicated differential form representatives are computed in terms of a connection and Higgs field.
We construct a calculus structure on the Lie conformal algebra cochain complex. By restricting to degree one chains, we recover the structure of a g-complex introduced in [DSK]. A special case of this construction is the variational calculus, for which we provide explicit formulas.
Pricing a call option with payoff function $C=\max\{S_T - S_{T/2}, 0\}$, where $S_t$ follows a geometric Brownian motion. I appreciate any help! Please close this question if it is a duplicate. Thanks all!
My approach is to take out $S_{T/2}$:
$$C = S_{T/2} \max\left\{\frac{S_T}{S_{T/2}} - 1, 0\right\}$$
Then we can define a risk neutral measure:
$$\begin{align} E[C] & = E\left[S_{T/2} \max\left\{\frac{S_T}{S_{T/2}} - 1, 0\right\}\right] \\[3pt] & = \tilde{E}\left[\max\left\{\frac{S_T}{S_{T/2}} - 1, 0\right\}\right] \end{align}$$
where we use $S_{T/2}$ as the numeraire. Then plug into the call option Black-Scholes formula with $S=1$, $K=1$ and $r=0$.
Is this approach correct? My main concern is, can we use $S_{T/2}$ as the numeraire? |
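One way to sanity-check the approach numerically is the special case $r=0$, where the proposed answer reduces to $S_0 \cdot C_{BS}(S=1, K=1, \sigma, \tau=T/2)$. The Python sketch below (all parameter values and function names are illustrative) simulates the GBM exactly at $T/2$ and $T$ and compares the Monte Carlo price against that formula:

```python
import math
import random
from statistics import NormalDist

def bs_call(S, K, sigma, tau, r=0.0):
    """Standard Black-Scholes call price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    N = NormalDist().cdf
    return S * N(d1) - K * math.exp(-r * tau) * N(d2)

def mc_forward_start(S0, sigma, T, n_paths, seed=0):
    """Monte Carlo price of max(S_T - S_{T/2}, 0) under the
    risk-neutral measure, assuming r = 0 for simplicity."""
    rng = random.Random(seed)
    half = T / 2.0
    drift = -0.5 * sigma**2 * half   # risk-neutral drift with r = 0
    vol = sigma * math.sqrt(half)
    total = 0.0
    for _ in range(n_paths):
        s_half = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        s_T = s_half * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(s_T - s_half, 0.0)
    return total / n_paths

S0, sigma, T = 100.0, 0.2, 1.0
mc = mc_forward_start(S0, sigma, T, 200_000)
analytic = S0 * bs_call(1.0, 1.0, sigma, T / 2.0)  # the claimed closed form
```

With these illustrative numbers the two values agree to Monte Carlo accuracy, which supports the homogeneity argument in the $r=0$ case; for $r \neq 0$ the discounting and the change of measure deserve exactly the extra care raised in the question.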
Contents MA 453 Fall 2008 Professor Walther News
Here are some basic pointers:
In order to do any editing, you must be logged in with your Purdue career account. If you look under MediaWiki FAQ, you get lots of instructions on how to work with Rhea. Some important things are under item 4 in that manual. If you want to do things like $ \sum_{i=1}^\infty 1/i^2 = \frac{\pi^2}{6} $ here then you should look a) at the "view source" button on this page and b) get acquainted with Latex [1], a text-formatting program designed to write math stuff.
Here is some more math, to show you math-symbol commands: $ \forall x\in{\mathbb R}, x^2\ge 0 $, $ \exists n\in{\mathbb N}, n^2\le 0 $ where $ {\mathbb N}=0,1,2,\ldots $
If you need to find out a latex command, google for the thing you want to make, latex, and "command". (E.g., google for "integral latex command".)
If you want to make a new page, all you need to do is to invent one. For example, let's say I want to make a page for further instructions on how to deal with Rhea. I just type "double-left-square-bracket page with more instructions double-right-square-bracket", where of course I use the actual brackets. The effect is: I get a link (initially red) to a page that is the empty set. Once I click it, the link page with more instructions_MA453Fall2008walther turns blue and I am transferred to a newborn page of name as indicated.
Note: it may take a few minutes for the new page to start existing. If you click the red link and nothing happens, wait a bit and try again.
Ideas what to put on Rhea
Course notes, HW discussion, solutions to problems you encountered while using Rhea (how do you upload, make links, post movies, ...)
For week 1, click this link here and on that new page create a page as outlined above. Then move to that page and state your favorite theorem. Why is it your favorite theorem? Do other people have the same favorite theorem? Crosslink! Use the math environment if appropriate.
For week 3: post and discuss the notion/theorem that you have found hardest to understand so far. Alternatively, find somebody else's post and reply to it by explaining how you understand things.
Rhea Questions Course notes Discussion topics Homework Discussion
Homework 1, September 4
Homework 2, September 11 Homework 3, September 18 Homework 4, September 25 Homework 5, October 2 Homework 6, October 9 Homework 7, October 23 Homework 8, October 30 Homework 9, November 6 Homework 10, November 13 Homework 11, November 20 Homework 12, December 4 Homework 13, December 11 Math News Interesting Articles Latex comments
More Latex!_MA453Fall2008walther - Latex commands from NASA! |
Heat Transfer in Fully-Developed Internal Turbulent Flow From Thermal-FluidsPedia
Revision as of 03:04, 8 July 2010
Heat transfer in fully-developed turbulent flow in a circular tube subject to constant heat flux (<math>q''_w = \text{const}</math>) will be considered in this subsection (Oosthuizen and Naylor, 1999). When the turbulent flow in the tube is fully developed, the energy eq. (5.268) becomes
After the turbulent flow is hydrodynamically and thermally fully developed, the time-averaged temperature profile is no longer a function of axial distance from the inlet, i.e.,
where <math>\bar{T}_c</math> is the time-averaged temperature at the centerline of the tube, and <math>T_w</math> is the wall temperature. Thus, this dimensionless temperature ratio is a function of <math>r</math> only, i.e.,
where <math>f</math> is independent of <math>x</math>. Differentiating (5.295) yields
At the wall, the contribution of eddy diffusivity on the heat transfer is negligible, and the heat flux at the wall becomes
Substituting eq. (5.296) into eq. (5.298), one obtains:
Since the heat flux is constant, <math>q''_w = \text{const}</math>, it follows that:
Therefore, eq. (5.297) becomes:
For fully developed flow, the local heat transfer coefficient is:
where <math>\bar{T}_m</math> is the time-averaged mean temperature, defined as:
Since <math>q''_w = \text{const}</math>, it follows from eq. (5.302) that:
Combining eqs. (5.300), (5.301) and (5.304), the following relationships are obtained:
The time-averaged mean temperature, <math>\bar{T}_m</math>, changes with <math>x</math> as the result of heat transfer from the tube wall. By following the same procedure as in Example 5.2, the rate of mean temperature change can be obtained as follows:
Substituting eq. (5.305) into eq. (5.294), the energy equation becomes:
where <math>y = r_0 - r</math> is the distance measured from the tube wall. Equation (5.307) is subject to the following two boundary conditions:
(axisymmetric condition)
Integrating eq. (5.307) over the interval <math>(r_0, r)</math> and considering eq. (5.308), we have:
which can be rearranged to
where
Integrating eq. (5.311) over the interval <math>(0, y)</math> and considering eq. (5.309), one obtains:
If the profiles of axial velocity and thermal eddy diffusivity are known, eq. (5.313) can be used to obtain the correlation for internal forced convection heat transfer. With the exception of the very thin viscous sublayer, the velocity profile in most of the tube is fairly flat. Therefore, it is assumed that the time-averaged velocity <math>\bar{u}</math> in eq. (5.312) can be replaced by the mean velocity <math>\bar{u}_m</math>, and I(y) becomes:
Substituting eqs. (5.314) and (5.306) into eq. (5.313) yields:
which can be rewritten in terms of the wall coordinate
where <math>y^+</math> is defined in eq. (5.277). To consider heat transfer in an internal turbulent flow, the entire turbulent boundary layer is divided into three regions: (1) the inner region (<math>y^+ < 5</math>), (2) the buffer region (<math>5 < y^+ < 30</math>), and (3) the outer region (<math>y^+ > 30</math>). In the inner region the eddy diffusivity is negligible, and eq. (5.316) becomes
Since the inner region is very thin, <math>1 - y/r_0</math> is effectively equal to 1. Therefore, the temperature profile in the inner region becomes:
The temperature at the boundary between the inner and buffer regions (<math>y^+ = 5</math>) can be obtained from eq. (5.318) as
In the buffer region, where <math>5 < y^+ < 30</math>, the eddy diffusivity is:
Substituting eq. (5.320) into eq. (5.316) and assuming the turbulent Prandtl number is equal to 1, the following expression is obtained:
Since the buffer region is also very thin, <math>1-y/{{r}_{0}}</math> in eq. (5.321) is effectively equal to 1. Defining <math>{{T}^{+}}={(\bar{T}-{{T}_{w}})}/{\left( -\frac{{{{{q}''}}_{w}}}{\rho {{c}_{p}}}\sqrt{\frac{\rho }{{{\tau }_{w}}}} \right)}\;</math> and integrating eq. (5.321) yields
i.e.,
The temperature at the top of the buffer region, where <math>y^+ = 30</math>, becomes
For the outer region, where <math>y^+ > 30</math>, eq. (5.316) becomes
where the turbulent Prandtl number is assumed to be equal to 1. It is assumed that the Nikuradse equation (5.276) is valid in the outer region and the velocity gradient in this region becomes:
The expression of the apparent shear stress in this region, eq. (5.280), can be non-dimensionalized using eqs. (5.277) and (5.278) as:
Substituting eqs. (5.275) and (5.326) into eq. (5.327), the eddy diffusivity in the outer region is obtained as:
Substituting eq. (5.328) into eq. (5.325), the temperature distribution in this region becomes:
which is valid from <math>y^+ = 30</math> to the center of the tube, where <math>y = r_0</math>, or
The temperature at the center of the tube, <math>\bar{T}_c</math>, can be obtained by letting <math>y = r_0</math> in eq. (5.329), i.e.
The overall temperature change from the wall to the center of the tube can be obtained by adding eqs. (5.319), (5.324) and (5.331):
It follows from the definition of friction factor, eq. (5.282), that
Substituting eq. (5.333) into eq. (5.332) and considering the definition of the Reynolds number, eq. (5.332) becomes:
(5.334)
In order to obtain the heat transfer coefficient, <math>h</math>, the temperature difference must be obtained. If the velocity profile can be approximated by eq. (5.287), the temperature and velocity profiles can also be approximated by the one-seventh law, i.e.,
it follows that
Substituting eq. (5.334) into eq. (5.336) results in:
which can be rearranged to the following empirical correlation
which can be used, together with the appropriate friction coefficient discussed in the previous subsection, to obtain the Nusselt number.
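Although the equation images above are missing, the derivation follows the classical three-layer analysis, and the resulting temperature profile can be sketched in code. The constants below (sublayer edge at <math>y^+=5</math>, buffer edge at <math>y^+=30</math>, turbulent Prandtl number equal to 1) are the standard textbook values and are assumptions insofar as the article's own eqs. (5.318)-(5.329) are not visible here:

```python
import math

def t_plus(y_plus, Pr):
    """Three-layer temperature profile T+(y+) of the classical
    von Karman-type analysis (assumed to match the lost equations):
      inner  (y+ <= 5):      T+ = Pr * y+
      buffer (5 < y+ <= 30): eddy diffusivity nu_t/nu = y+/5 - 1
      outer  (y+ > 30):      log law with slope 2.5
    """
    t5 = 5.0 * Pr                               # T+ at the sublayer edge
    t30 = t5 + 5.0 * math.log(1.0 + 5.0 * Pr)   # T+ at the buffer edge
    if y_plus <= 5.0:
        return Pr * y_plus
    if y_plus <= 30.0:
        return t5 + 5.0 * math.log(1.0 + Pr * (y_plus / 5.0 - 1.0))
    return t30 + 2.5 * math.log(y_plus / 30.0)
```

The three pieces join continuously at <math>y^+=5</math> and <math>y^+=30</math>, and the wall-to-center temperature difference is recovered by evaluating the outer branch at the centerline, mirroring the summation of eqs. (5.319), (5.324) and (5.331) above.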
There are many articles like Testing the isotropy of the Universe by using the JLA compilation of type-Ia supernovae (PDF, arxiv.org) that search for a dipole effect in cosmology with type Ia supernovae (used as standard candles). The idea of this kind of search is to test the cosmological principle in a phenomenological way, independently of any given model (e.g. the Tolman-Bondi anisotropic model).
They consider a dipole effect on the distance modulus $ \mu $: $$ \mu \leftarrow \mu \times \left(1+A_D (\hat{\textbf{n}}\cdot \hat{\textbf{p}})\right)$$ where $\hat{\textbf{n}}$ is unitary vector pointing in the direction $(l,b)$ of the dipole with an amplitude of $A_D$ and $\hat{\textbf{p}}$ points the direction of each Type Ia supernova. The distance modulus is related to luminosity distance $d_l$ as $ \mu =5 \log \left(\frac{d_l}{10 pc}\right)$.
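The modulation ansatz above is simple to apply in code; in this Python sketch (helper names are mine, not from the paper) the dipole direction is given in galactic coordinates:

```python
import math

def unit_vector(l_deg, b_deg):
    """Unit vector for galactic longitude/latitude (l, b) in degrees."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    return (math.cos(b) * math.cos(l),
            math.cos(b) * math.sin(l),
            math.sin(b))

def dipole_modulated_mu(mu, A_D, n_hat, p_hat):
    """Apply mu -> mu * (1 + A_D * (n . p)) for one supernova."""
    dot = sum(a * b for a, b in zip(n_hat, p_hat))
    return mu * (1.0 + A_D * dot)

# A supernova along the dipole axis has its distance modulus raised by
# the full amplitude; one in the opposite direction is lowered by it.
```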
There is something that bothers me a bit:
Our peculiar velocity is usually estimated from the CMB dipole measurements, right? And so the redshifts of SNIa are computed in the CMB frame using this correction.
So, supposing we have that kind of cosmological anisotropy, how is it not mistakenly removed along with the CMB dipole correction? (Could this also be why all searches of this kind give an anisotropy amplitude compatible with 0?)
I mean, the CMB dipole implies a velocity of $369.5\pm3.0$ km/s in the direction $l=264.4^{\circ}\pm0.3^{\circ}$, $b= 48.4^{\circ}\pm0.5^{\circ}$ according to COBE measurements. But what if the CMB dipole is not just due to our peculiar velocity but also to an unknown "cosmological effect"? This effect would be corrected away, and nothing would be seen when trying to make a dipole fit on SNIa measurements, right?
This is because cosmic strings are not gravitating matter in the usual sense but space-time defects. You will in fact obtain a flat space-time for every $\sigma = k/4$, $k \in \mathbb{Z}$, but in the coordinates you use this will each time be Minkowski, just in a different set of coordinates.
You can see this by the canonical construction of cosmic strings as can be found e.g. in Griffiths & Podolský: Consider Minkowski in cylindrical coordinates: $$ds^2 = - dt^2 + d\rho^2 + dz^2 + \rho^2 d \varphi^2$$ Now introduce a defect by gluing $\varphi=0$ to $\varphi=2 \pi(1- \delta)$ instead of $2 \pi$. Now rescale $\varphi = (1-\delta) \phi$, so that the new angle $\phi$ again runs from $0$ to $2\pi$, and your metric will be $$ds^2 = - dt^2 + d\rho^2 + dz^2 + (1-\delta)^2 \rho^2 d \phi^2$$ The parameter $\delta$ can also be linked with the matter density $\lambda$ of the defect interpreted as a massive string, $\lambda=\delta/4$. However, once you circulate from $\delta=0$ to $\delta=1$, you should start again identifying $0$ with $2 \pi$ and get Minkowski again, which is not reflected in the construction of the coordinate $\phi$ given above. On the other hand, the metric in Weyl coordinates as you introduce it seems to be satisfactory in this respect, albeit at the cost of a re-covering of Minkowski at every loop.
(Do note that if you obtained this formal solution by putting a $\delta$-peak of matter on the $\rho=0$ axis in Weyl coordinates, you have righteously unleashed the exact-solution horrors upon yourself as the metric singularity you produce can erase the matter itself!
*Evil laughter with that reverb suggesting it is coming from hell.*) |
Uplifting cardinals Uplifting cardinals were introduced by Hamkins and Johnstone in [1], from which some of this text is adapted.
An inaccessible cardinal $\kappa$ is
uplifting if and only if for every ordinal $\theta$ it is $\theta$-uplifting, meaning that there is an inaccessible $\gamma>\theta$ such that $V_\kappa\prec V_\gamma$ is a proper elementary extension.
An inaccessible cardinal is
pseudo uplifting if and only if for every ordinal $\theta$ it is pseudo $\theta$-uplifting, meaning that there is a cardinal $\gamma>\theta$ such that $V_\kappa\prec V_\gamma$ is a proper elementary extension, without insisting that $\gamma$ is inaccessible.
Being strongly uplifting (see below) is a boldface variant of being uplifting.
It is an elementary exercise to see that if $V_\kappa\prec V_\gamma$ is a proper elementary extension, then $\kappa$ and hence also $\gamma$ are $\beth$-fixed points, and so $V_\kappa=H_\kappa$ and $V_\gamma=H_\gamma$. It follows that a cardinal $\kappa$ is uplifting if and only if it is regular and there are arbitrarily large regular cardinals $\gamma$ such that $H_\kappa\prec H_\gamma$. It is also easy to see that every uplifting cardinal $\kappa$ is uplifting in $L$, with the same targets. Namely, if $V_\kappa\prec V_\gamma$, then we may simply restrict to the constructible sets to obtain $V_\kappa^L=L^{V_\kappa}\prec L^{V_\gamma}=V_\gamma^L$. An analogous result holds for pseudo-uplifting cardinals.
Contents 1 Consistency strength of uplifting cardinals 2 Uplifting cardinals and $\Sigma_3$-reflection 3 Uplifting Laver functions 4 Connection with the resurrection axioms 5 Strongly Uplifting 6 Weakly superstrong cardinal 7 References Consistency strength of uplifting cardinals Theorem.
1. If $\delta$ is a Mahlo cardinal, then $V_\delta$ has a proper class of uplifting cardinals.
2. Every uplifting cardinal is pseudo uplifting and a limit of pseudo uplifting cardinals.
3. If there is a pseudo uplifting cardinal, or indeed, merely a pseudo $0$-uplifting cardinal, then there is a transitive set model of ZFC with a reflecting cardinal and consequently also a transitive model of ZFC plus Ord is Mahlo.
Proof. For (1), suppose that $\delta$ is a Mahlo cardinal. By the Lowenheim-Skolem theorem, there is a club set $C\subset\delta$ of cardinals $\beta$ with $V_\beta\prec V_\delta$. Since $\delta$ is Mahlo, the club $C$ contains unboundedly many inaccessible cardinals. If $\kappa<\gamma$ are both in $C$, then $V_\kappa\prec V_\gamma$, as desired. Similarly, for (2), if $\kappa$ is uplifting, then $\kappa$ is pseudo uplifting and if $V_\kappa\prec V_\gamma$ with $\gamma$ inaccessible, then there are unboundedly many ordinals $\beta<\gamma$ with $V_\beta\prec V_\gamma$ and hence $V_\kappa\prec V_\beta$. So $\kappa$ is pseudo uplifting in $V_\gamma$. From this, it follows that there must be unboundedly many pseudo uplifting cardinals below $\kappa$. For (3), if $\kappa$ is inaccessible and $V_\kappa\prec V_\gamma$, then $V_\gamma$ is a transitive set model of ZFC in which $\kappa$ is reflecting, and it is thus also a model of Ord is Mahlo. QED
Uplifting cardinals and $\Sigma_3$-reflection Every uplifting cardinal is a limit of $\Sigma_3$-reflecting cardinals, and is itself $\Sigma_3$-reflecting. If $\kappa$ is the least uplifting cardinal, then $\kappa$ is not $\Sigma_4$-reflecting, and there are no $\Sigma_4$-reflecting cardinals below $\kappa$.
The analogous observation for pseudo uplifting cardinals holds as well, namely, every pseudo uplifting cardinal is $\Sigma_3$-reflecting and a limit of $\Sigma_3$-reflecting cardinals; and if $\kappa$ is the least pseudo uplifting cardinal, then $\kappa$ is not $\Sigma_4$-reflecting, and there are no $\Sigma_4$-reflecting cardinals below $\kappa$.
Uplifting Laver functions
Every uplifting cardinal admits an ordinal-anticipating Laver function, and indeed, a HOD-anticipating Laver function, a function $\ell:\kappa\to V_\kappa$, definable in $V_\kappa$, such that for any set $x\in\text{HOD}$ and $\theta$, there is an inaccessible cardinal $\gamma$ above $\theta$ such that $V_\kappa\prec V_\gamma$, for which $\ell^*(\kappa)=x$, where $\ell^*$ is the corresponding function defined in $V_\gamma$.
Connection with the resurrection axioms
Many instances of the (weak) resurrection axiom imply that ${\frak c}^V$ is an uplifting cardinal in $L$:
RA(all) implies that ${\frak c}^V$ is uplifting in $L$. RA(ccc) implies that ${\frak c}^V$ is uplifting in $L$. wRA(countably closed)+$\neg$CH implies that ${\frak c}^V$ is uplifting in $L$. Under $\neg$CH, the weak resurrection axioms for the classes of axiom-A forcing, proper forcing, semi-proper forcing, and posets that preserve stationary subsets of $\omega_1$, respectively, each imply that ${\frak c}^V$ is uplifting in $L$.
Conversely, if $\kappa$ is uplifting, then various resurrection axioms hold in a corresponding lottery-iteration forcing extension.
Theorem. (Hamkins and Johnstone) The following theories are equiconsistent over ZFC: There is an uplifting cardinal. RA(all) RA(ccc) RA(semiproper)+$\neg$CH RA(proper)+$\neg$CH for some countable ordinal $\alpha$, RA($\alpha$-proper)+$\neg$CH RA(axiom-A)+$\neg$CH wRA(semiproper)+$\neg$CH wRA(proper)+$\neg$CH for some countable ordinal $\alpha$, wRA($\alpha$-proper)+$\neg$CH wRA(axiom-A)+$\neg$CH wRA(countably closed)+$\neg$CH Strongly Uplifting
(Information in this section comes from [2])
Strongly uplifting cardinals are precisely strongly pseudo uplifting ordinals, strongly uplifting cardinals with weakly compact targets, superstrongly unfoldable cardinals and almost-hugely unfoldable cardinals.
Definitions
An ordinal is
strongly pseudo uplifting iff for every ordinal $θ$ it is strongly $θ$-uplifting, meaning that for every $A⊆V_κ$, there exists some ordinal $λ>θ$ and an $A^*⊆V_λ$ such that $(V_κ;∈,A)≺(V_λ;∈,A^*)$ is a proper elementary extension.
An inaccessible cardinal is
strongly uplifting iff for every ordinal $θ$ it is strongly $θ$-uplifting, meaning that for every $A⊆V_κ$, there exists some inaccessible(*) $λ>θ$ and an $A^*⊆V_λ$ such that $(V_κ;∈,A)≺(V_λ;∈,A^*)$ is a proper elementary extension. By replacing starred "inaccessible" with "weakly compact" and other properties, we get strongly uplifting with weakly compact etc. targets.
A cardinal $\kappa$ is
$\theta$-superstrongly unfoldable iff for every $A\subseteq\kappa$, there is some transitive $M$ with $A\in M\models\text{ZFC}$ and some $j:M\rightarrow N$ an elementary embedding with critical point $\kappa$ such that $j(\kappa)\geq\theta$ and $V_{j(\kappa)}\subseteq N$.
A cardinal $\kappa$ is
$\theta$-almost-hugely unfoldable iff for every $A\subseteq\kappa$, there is some transitive $M$ with $A\in M\models\text{ZFC}$ and some $j:M\rightarrow N$ an elementary embedding with critical point $\kappa$ such that $j(\kappa)\geq\theta$ and $N^{<j(\kappa)}\subseteq N$.
$κ$ is then called
superstrongly unfoldable (resp. almost-hugely unfoldable) iff it is $θ$-superstrongly unfoldable (resp. $θ$-almost-hugely unfoldable) for every $θ$; i.e. the target of the embedding can be made arbitrarily large. Equivalence
For any ordinals $κ$, $θ$, the following are equivalent:
$κ$ is strongly pseudo $(θ+1)$-uplifting. $κ$ is strongly $(θ+1)$-uplifting. $κ$ is strongly $(θ+1)$-uplifting with weakly compact targets. $κ$ is strongly $(θ+1)$-uplifting with totally indescribable targets, and indeed with targets having any property of $κ$ that is absolute to all models $V_γ$ with $γ > κ, θ$.
For any cardinal $κ$ and ordinal $θ$, the following are equivalent:
$κ$ is strongly $(θ+1)$-uplifting. $κ$ is superstrongly $(θ+1)$-unfoldable. $κ$ is almost-hugely $(θ+1)$-unfoldable. For every set $A ∈ H_{κ^+}$ there is a $κ$-model $M⊨\mathrm{ZFC}$ with $A∈M$ and $V_κ≺M$ and a transitive set $N$ with an elementary embedding $j:M→N$ having critical point $κ$ with $j(κ)> θ$ and $V_{j(κ)}≺N$, such that $N^{<j(κ)}⊆N$ and $j(κ)$ is inaccessible, weakly compact and more in $V$. $κ^{<κ}=κ$ holds, and for every $κ$-model $M$ there is an elementary embedding $j:M→N$ having critical point $κ$ with $j(κ)> θ$ and $V_{j(κ)}⊆N$, such that $N^{<j(κ)}⊆N$ and $j(κ)$ is inaccessible, weakly compact and more in $V$. Relations to other cardinals If $δ$ is a subtle cardinal, then the set of cardinals $κ$ below $δ$ that are strongly uplifting in $V_δ$ is stationary. If $0^♯$ exists, then every Silver indiscernible is strongly uplifting in $L$. In $L$, $κ$ is strongly uplifting iff it is unfoldable with cardinal targets. Every strongly uplifting cardinal is strongly uplifting in $L$. Every strongly $θ$-uplifting cardinal is strongly $θ$-uplifting in $L$. Every strongly uplifting cardinal is strongly unfoldable of every ordinal degree $α$ and a stationary limit of cardinals that are strongly unfoldable of every ordinal degree and so on. Relation to boldface resurrection axiom
The following theories are equiconsistent over $\mathrm{ZFC}$:
There is a strongly uplifting cardinal. The boldface resurrection axiom for all forcing, for proper forcing, for semi-proper forcing and for c.c.c. forcing. The weak boldface resurrection axioms for countably-closed forcing, for axiom-$A$ forcing, for proper forcing and for semi-proper forcing, respectively, plus $¬\mathrm{CH}$. Weakly superstrong cardinal
(Information in this section comes from [3])
Hamkins and Johnstone called an inaccessible cardinal $κ$ weakly superstrong if for every transitive set $M$ of size $κ$ with $κ∈M$ and $M^{<κ}⊆M$, there exist a transitive set $N$ and an elementary embedding $j:M→N$ with critical point $κ$ for which $V_{j(κ)}⊆N$. It is called weakly almost huge if for every such $M$ there is such a $j:M→N$ for which $N^{<j(κ)}⊆N$. (As usual, one can call $j(κ)$ the target.)
A cardinal is superstrongly unfoldable if it is weakly superstrong with arbitrarily large targets, and it is almost hugely unfoldable if it is weakly almost huge with arbitrarily large targets.
If $κ$ is weakly superstrong, it is $0$-extendible and $\Sigma_3$-extendible. Weakly almost huge cardinals are also $\Sigma_3$-extendible. Because $\Sigma_3$-extendibility can always be destroyed, all these cardinal properties (among others) are never Laver indestructible.
References

1. Hamkins, Joel David and Johnstone, Thomas A. Resurrection axioms and uplifting cardinals, 2014.
2. Hamkins, Joel David and Johnstone, Thomas A. Strongly uplifting cardinals and the boldface resurrection axioms, 2014.
3. Bagaria, Joan; Hamkins, Joel David; Tsaprounis, Konstantinos; Usuba, Toshimichi. Superstrong and other large cardinals are never Laver indestructible. Archive for Mathematical Logic 55(1-2):19–35, 2013.
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and is often quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England, developed the tog as an easy-to-follow alternative to the SI unit of m²·K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²·K/W); 1 tog = 0.1 RSI. There is also a clo clothing unit, equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca...
I am a bit confused about angular momentum in classical physics. For the orbital motion of a point mass: if we pick a new coordinate system (one that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; there is an additional term that varies with time.)
in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the first term varies with time ($\vec R$ is the shift of coordinates; $\vec R$ is constant, while $\vec p$ is, roughly, rotating).
Would anyone be kind enough to shed some light on this for me?
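A quick numerical sketch of the algebra in the question. Everything here is an illustrative assumption, not taken from the text: a unit-mass particle on a circular orbit about the original origin (so the force is central there), with a hypothetical coordinate shift $\vec R = (1, 0)$. The angular momentum $L_z$ about the origin stays constant, while $L'_z = (\vec R \times \vec p)_z + L_z$ picks up the time-varying term:

```python
import math

# Illustrative setup (assumed, not from the question): unit-mass particle
# on a circular orbit about the original origin, coordinate shift R = (1, 0).

def Lz(t):
    """z-component of L = r x p about the original origin."""
    rx, ry = math.cos(t), math.sin(t)       # r(t)
    px, py = -math.sin(t), math.cos(t)      # p(t) for unit mass
    return rx * py - ry * px                # = cos^2 t + sin^2 t = 1

def Lz_shifted(t, Rx=1.0, Ry=0.0):
    """z-component of L' = (R + r) x p about the shifted origin."""
    px, py = -math.sin(t), math.cos(t)
    return (Rx * py - Ry * px) + Lz(t)      # R x p term plus the original L

samples = [Lz(k / 10) for k in range(10)]          # constant in time
shifted = [Lz_shifted(k / 10) for k in range(10)]  # varies with time
```

The extra term is not a mistake: about the shifted point the central force exerts a nonzero torque, so $\vec L'$ need not be conserved there.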
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching The Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet
Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we have neither the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape, but this leads me to another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields, each coming from some point in the cloud?
Rho
The $17$th letter of the Greek alphabet.
Minuscules: $\rho$ and $\varrho$
Majuscule: $\Rho$
The $\LaTeX$ code for \(\rho\) is
\rho .
The $\LaTeX$ code for \(\varrho\) is
\varrho .
The $\LaTeX$ code for \(\Rho\) is
\Rho .
$\rho$ (density): $\rho = \dfrac m V$

where $m$ is the mass of a body and $V$ is its volume.
$\rho_A$ (area density): $\rho_A = \dfrac m A$

where $m$ is the mass and $A$ is the area over which it is distributed.
The $\LaTeX$ code for \(\rho_A\) is
\rho_A .
$\rho_a$

Let $\struct {S, \circ}$ be an algebraic structure. The mapping $\rho_a: S \to S$ is defined as:

$\forall x \in S: \map {\rho_a} x = x \circ a$

This is known as the right regular representation of $\struct {S, \circ}$ with respect to $a$.

The $\LaTeX$ code for \(\map {\rho_a} x\) is
\map {\rho_a} x .
$\rho$: $\rho = \dfrac 1 {\left\lvert{k}\right\rvert}$
Analysis of Rugates
\(n(x)=n_a+0.5 n_p A(x) \sin\left(\displaystyle\frac{4\pi x}{\lambda_0}\right)\)
\(A(x)=10 t^3-15 t^4+6 t^5 \)
\(t=\displaystyle\frac{2x}{TOT},\;\;\mbox{ if}\;\; x<\frac{TOT}{2} \)
\(t=\displaystyle\frac{2(TOT-x)}{TOT},\;\;\mbox{ if}\;\; x>\frac{TOT}{2}\)
Here $TOT$ is the total optical thickness.
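As an illustration, the profile above can be coded directly. The parameter values below ($n_a$, $n_p$, $\lambda_0$, $TOT$) are made-up placeholders for the sketch, not values from the text:

```python
import math

# Illustrative parameter values (assumptions, not from the text):
n_a, n_p = 1.75, 0.5      # average index and full modulation amplitude
lam0 = 600.0              # design wavelength lambda_0, in nm
TOT = 6000.0              # total optical thickness, in nm

def apodization(x):
    """Quintic apodization A(x) = 10 t^3 - 15 t^4 + 6 t^5, built from the
    symmetric ramp t(x) defined piecewise about TOT/2."""
    t = 2 * x / TOT if x < TOT / 2 else 2 * (TOT - x) / TOT
    return 10 * t**3 - 15 * t**4 + 6 * t**5

def index_profile(x):
    """Rugate refractive-index profile n(x)."""
    return n_a + 0.5 * n_p * apodization(x) * math.sin(4 * math.pi * x / lam0)
```

The apodization rises smoothly from 0 at both ends of the coating to 1 at its centre, so the sinusoidal index modulation is switched on and off gradually.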
Design of Rugates
Rugate filters are optical coatings whose refractive index varies continuously.
In comparison with conventional multilayer structures, rugate filters offer some special advantages:
Example: designed beam splitter with the help of rugate filter design option.
For a theoretical analysis, a rugate coating structure can be approximated by a refractive-index profile with a large number of steps. The design of rugates therefore requires an efficient method to specify the structure, calculate spectral characteristics and optimize the structure.
OptiLayer allows you to design rugate filters quickly and effectively.
The details of our expertise in rugate coatings analysis and design have been published:
ISSN: 1930-5346
eISSN: 1930-5338
Advances in Mathematics of Communications
May 2014, Volume 8, Issue 2
Abstract:
A problem of improving the accuracy of nonparametric entropy estimation for a stationary ergodic process is considered. New weak metrics are introduced and relations between metrics, measures, and entropy are discussed. A new nonparametric entropy estimator is constructed based on weak metrics; it has a parameter with which the estimator is optimized to reduce its bias. It is shown that the estimator's variance is upper-bounded by a nearly optimal Cramér-Rao lower bound.
Abstract:
This paper gives lower and upper bounds on the covering radius of codes over $\mathbb{Z}_{2^s}$ with respect to the homogeneous distance. We also determine the covering radius of various Repetition codes and Simplex codes (Type $\alpha$ and Type $\beta$) and their duals, and give bounds on the covering radii for MacDonald codes of both types over $\mathbb{Z}_4$.
Abstract:
A set of quasi-uniform random variables $X_1,\ldots,X_n$ may be generated from a finite group $G$ and $n$ of its subgroups, with the corresponding entropic vector depending on the subgroup structure of $G$. It is known that the set of entropic vectors obtained by considering arbitrary finite groups is much richer than the one provided just by abelian groups. In this paper, we start to investigate in more detail different families of non-abelian groups with respect to the entropic vectors they yield. In particular, we address the question of whether a given non-abelian group $G$ and some fixed subgroups $G_1,\ldots,G_n$ end up giving the same entropic vector as some abelian group $A$ with subgroups $A_1,\ldots,A_n$, in which case we say that $(A, A_1, \ldots, A_n)$ represents $(G, G_1, \ldots, G_n)$. If for any choice of subgroups $G_1,\ldots,G_n$, there exists some abelian group $A$ which represents $G$, we refer to $G$ as being abelian (group) representable for $n$. We completely characterize dihedral, quasi-dihedral and dicyclic groups with respect to their abelian representability, as well as the case when $n=2$, for which we show a group is abelian representable if and only if it is nilpotent. This problem is motivated by understanding non-linear coding strategies for network coding, and network information theory capacity regions.
Abstract:
To resist Binary Decision Diagrams (BDD) based attacks, a Boolean function should have a high BDD size. The hidden weighted bit function (HWBF), introduced by Bryant in 1991, seems to be the simplest function with exponential BDD size. In [28], Wang et al. investigated the cryptographic properties of the HWBF and found that it is a very good candidate for being used in real ciphers. In this paper, we modify the HWBF and construct two classes of functions with very good cryptographic properties (better than the HWBF). The new functions are balanced, with almost optimum algebraic degree and satisfy the strict avalanche criterion. Their nonlinearity is higher than that of the HWBF. We investigate their algebraic immunity, BDD size and their resistance against fast algebraic attacks, which seem to be better than those of the HWBF too. The new functions are simple, can be implemented efficiently, have high BDD sizes and rather good cryptographic properties. Therefore, they might be excellent candidates for constructions of real-life ciphers.
Abstract:
In this paper, we focus on the design of unitary space-time codes achieving full diversity using division algebras, and on the systematic computation of their minimum determinant. We also give examples of such codes with high minimum determinant. Division algebras allow one to obtain higher rates than known constructions based on finite groups.
Abstract:
The values of the homogeneous weight are determined for finite Frobenius rings that are a direct product of local Frobenius rings. This is used to investigate the partition induced by this weight and its dual partition under character-theoretic dualization. A characterization is given of those rings for which the induced partition is reflexive or even self-dual.
Abstract:
A binary sequence family ${\mathcal S}$ of length $n$ and size $M$ can be characterized by the maximum magnitude of its nontrivial aperiodic correlation, denoted as $\theta_{\max} ({\mathcal S})$. The lower bound on $\theta_{\max} ({\mathcal S})$ was originally presented by Welch and improved later by Levenshtein. In this paper, a Fourier transform approach is introduced in an attempt to improve Levenshtein's lower bound. Through the approach, a new expression of the Levenshtein bound is developed. Along with numerical support, it is found that $\theta_{\max} ^2 ({\mathcal S}) > 0.3584 n-0.0810$ for $M=3$ and $n \ge 4$, and $\theta_{\max} ^2 ({\mathcal S}) > 0.4401 n-0.1053$ for $M=4$ and $n \ge 4$, respectively, which are tighter than the original Welch and Levenshtein bounds.
Abstract:
In the present article we propose a reduction point algorithm for any Fuchsian group in the absence of parabolic transformations. We extend to this setting classical algorithms for Fuchsian groups with parabolic transformations, such as the
flip-flop algorithm known for the modular group $\mathbf{SL}(2, \mathbb{Z})$, whose roots go back to [9]. The research has been partially motivated by the need to design more efficient codes for wireless data transmission and by the study of Maass waveforms from a computational point of view.
I was reading about energy usage in batteries and don't quite understand why it is measured in different units than home electrical usage. An ampere-hour does not include a measure of volts. But my understanding is that a battery has a constant voltage (1.5 V, 9 V, ...) just as much as home electrical usage (120 V, 220 V, ...). So I don't see why they are measured in different units.
\$kW \cdot h\$ is a measure of energy, for which grid customers are billed; it usually shows up on your invoice in easily understood numbers (0-1000, not 0-1 or very large numbers; ranges which, unfortunately, confuse many people).
\$A \cdot h\$ is a measure of electrical charge. A battery (or capacitor) can store more or less a certain amount of charge regardless of its operating conditions, whereas its output energy can change. If the voltage curve for a battery in certain operating conditions is known (circuit, temperature, lifetime), then its output energy is also known, but not otherwise, though you can come up with some pretty good estimates.
To convert from \$A \cdot h\$ to \$kW \cdot h\$ for a constant voltage source, multiply by that voltage; for a changing voltage and/or current source, integrate over time: $$ \frac{1 kW\cdot h}{1000 W\cdot h}\int_{t_1}^{t_2} \! I(t)E(t)dt ~;~~E~[V],~I~[A],~{t_{1,2}}~[h]$$
A note about battery voltage: Rated battery voltage is "nominal". A fully charged 12 volt lead acid battery actually starts out around ~14.4 volts and drops off as you draw energy from it. The actual battery voltage depends on a number of factors not limited to state of charge, battery age, load profile, chemistry, etc,... For instance, A lithium ion battery of 3.7V (nominal) may start out at 4.15 volts and diminish to ~2.7 volts before requiring recharge.
Watt-Hours (or kW-H) is an indicator of the energy storage capacity of the battery, whereas amp-hours would refer to how many amps minimum you can draw from a battery at full charge for an hour before it was no longer capable of providing that level of flow (perhaps at or above the rated voltage?). They are closely related, but not equivalent. Some batteries are designed more for high current draw devices, whereas others are designed to last a long time for lower current draw devices.
Appended: Now that I look at my cell phone battery, I notice that it has all three ratings printed on it. It is a lithium-ion battery whose nominal voltage rating is 3.7 V. Its energy capacity is marked as 4.81 watt-hours. Its electric charge rating is 1300 milliamp-hours. This seems to indicate that Energy = Voltage × Electric Charge (at least in terms of the battery ratings), though I think that this equation is hiding the fact that there is an integration of P = VI going on and that V is more like an average value than a constant, which probably gives a pretty good approximation.
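For what it's worth, that nominal-voltage approximation can be checked against the label figures quoted above (3.7 V nominal, 1300 mAh) in a couple of lines of Python:

```python
# Energy (Wh) ≈ nominal voltage (V) × charge (Ah): the approximation
# described above, exact only if the voltage really were constant.

def watt_hours(nominal_volts, amp_hours):
    return nominal_volts * amp_hours

phone_wh = watt_hours(3.7, 1.300)   # Li-ion phone battery: 3.7 V, 1300 mAh
```

This reproduces the printed 4.81 Wh rating, which suggests the manufacturer used exactly this nominal-voltage product.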
The way a battery works, the total coulombs it can push around falls out more directly than the total energy it can store. The voltage is not constant. It varies by state of charge for one, and the relationship between the two can be quite different between battery chemistries. All this is to say that A-h is more relevant to battery manufacturers than W-h or Joules.
Joules can of course be relevant to circuit design, so this information is available, just not included in the 2 second sound bite called the Amp-hour rating. Battery datasheets can get quite complex. As with most things, there are a host of tradeoffs and thorough information is more than a single number. If you do have to pick just two numbers to quickly characterize a battery, Volts and Amp-hours are as good as any, and are what the industry has converged on.
A battery's voltage changes over its lifetime. The current is set by the circuit it is connected to.
As the current is a known value that can be predicted, and the voltage is not, the units are given in terms of the value that can be predicted.
Your electricity supply is a constant voltage and can be predicted.
One factor not yet mentioned is that because batteries have a certain amount of internal resistance, drawing more current will cause the voltage to sag. Suppose, hypothetically, that a particular battery that's been discharged a certain amount will supply 12 volts when supplying 10mA, or 10 volts when supplying 100mA. Drawing 10mA from the battery for 10 hours will discharge it about as much as drawing 100mA for an hour, but in the former scenario the battery would have supplied 20% more "useful" energy. Key point: a larger fraction of the energy in a battery will be lost when trying to drain it quickly than when trying to drain it slowly.
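Plugging the hypothetical numbers from that scenario into plain arithmetic confirms the claimed 20% difference:

```python
# Hypothetical battery from the paragraph above: 12 V at 10 mA for 10 h
# versus 10 V at 100 mA for 1 h (same charge drawn either way: 100 mAh).
slow_wh = 12 * 0.010 * 10      # slow drain: 1.2 Wh of useful energy
fast_wh = 10 * 0.100 * 1       # fast drain: 1.0 Wh of useful energy
extra = slow_wh / fast_wh - 1  # fractional gain from draining slowly
```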
Power lines also have a certain level of resistance, and similar factors may apply, but the line voltage reaching a residential customer's meter is generally not appreciably affected by that customer's usage. A power company could supply one amp at 105 volts using 20% less energy (per unit time) than would be required to supply one amp at 126 volts. If customers were billed per amp-hour, the power companies would have an incentive to supply their energy at the lowest possible voltage. Billing per kWh means the customer's billable usage will be proportional to the amount of energy the power company has to generate to supply it. Incidentally, some devices (e.g. induction motors) will often draw less current at higher voltages (while doing the same amount of work), while other devices like incandescent lamps and heaters will draw more current at higher voltages (while producing substantially more light and heat).
Simply put, an "amp-hour" is not a scientific or SI unit. The amp-hour is a rating that battery manufacturers use, but because one ampere = one coulomb per second, when multiplied by one hour the two time factors cancel out and the result is simply 1 amp-hour = 3600 coulombs of charge, with no time factor involved. So it is a bit of smoke and mirrors from the battery manufacturers. If you want to really know how your battery is going to perform, you will have to look a little deeper than taking the word of the sales people...!
I have real-valued functions $\{f_n\},f$ on a subset $X\subset \mathbb R^n$ that are equicontinuous and I have Borel measures $\{\mu_n\},\mu$. I have that
1. For each fixed $m$, $\int f_m\, d\mu_n\to\int f_m\, d\mu$ and $\int f\, d\mu_n\to\int f\, d\mu$.
2. $f_n\to f$ pointwise (so also uniformly, since equicontinuous).
3. $|f_n|\leq g$ for all $n$, $|f|\leq g$, $\int g\, d\mu_n<\infty$ for all $n$, $\int g\, d\mu<\infty$, and $\int g\, d\mu_n\to\int g\, d\mu$.
The question is whether $\int f_nd\mu_n\to\int f d\mu$?
So for example, if we take away equicontinuity, then $f_n(x)=x^n$ on $[0,1]$ and $\mu_n=(\text{point mass at } 1-1/n)$ makes this break (LHS $\to 1/e$, RHS $=1$ in the above question). But it's not equicontinuous. The thing I worry about, though, is that the integration doesn't have anything to do with the topology on $X$, so equicontinuity may be a moot point. But I'm not sure about this.
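A quick numerical check of this counterexample: with $\mu_n$ a point mass at $1-1/n$, the left-hand side is $\int f_n\,d\mu_n = (1-1/n)^n$, which indeed approaches $1/e$:

```python
import math

# LHS of the counterexample: integrating f_n(x) = x^n against the point
# mass at 1 - 1/n just evaluates (1 - 1/n)^n.
def lhs(n):
    return (1 - 1 / n) ** n

values = [lhs(10 ** k) for k in range(1, 7)]
limit = math.exp(-1)   # the claimed limit 1/e, while the RHS f(1) = 1
```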
So can you prove $\int f_nd\mu_n\to\int f d\mu$ given the assumptions including equicontinuity?
If it doesn't work, what sort of extra lax condition might fix it? (Without touching the convergence modes, i.e. without asking for the measures to converge in total variation, which would immediately fix it; only conditions like the $f_n$ having to be extra continuous somehow, or $X$ having to be a product of intervals in $\mathbb R^n$, etc.)
Complete Differentiable Semiclassical Spectral Asymptotics

Chapter. First Online: 14 September 2019

Abstract
For an operator \(A:=A_h= A^0(hD) + V(x, hD)\) with a "potential" \(V\) decaying as \(|x|\rightarrow \infty \), we establish under certain assumptions the complete asymptotics of \(e_h(x, x,\tau )\), differentiable with respect to \(\tau \), where \(e_h(x, y,\tau )\) is the Schwartz kernel of the spectral projector.

Key words and phrases: microlocal analysis; differentiable complete spectral asymptotics
This research was supported in part by National Science and Engineering Research Council (Canada) Discovery Grant RGPIN 13827.
© Springer Nature Switzerland AG 2019
I have a table called AssignmentMarks that stores students' assignment marks for different subjects. This table has a column called Marks varchar(10) which stores the marks. The marks can be values like 1 or 2, and they can also be +, - or -+, where each sign represents a specific mark.

Note: + is 1 mark, - is -1 and -+ is 0.5.

While getting a student's marks from the table I am facing the Arithmetic Overflow error, and I don't know the cause of this error.
The query is as follows:
SELECT SUM(
    CAST(
        CASE
            WHEN a.Marks IS NULL OR a.Marks = '' THEN 0
            WHEN a.Marks = '+'  THEN 1
            WHEN a.Marks = '-'  THEN -1
            WHEN a.Marks = '-+' THEN 0.5
            ELSE a.Marks
        END AS DECIMAL(5,2)
    )
)
FROM AssignmentMarks AS a
WHERE a.StudentID = 10 AND a.SubjectID = 1
After Executing the above query I get the following error.
Msg 8115, Level 16, State 8, Line 7
Arithmetic overflow error converting varchar to data type numeric.
Any idea what the main cause of this error is?
I want to build this infinite continued fraction
$$F_{n}(x)= \frac{1}{1-x\frac{(n+1)^2}{4(n+1)^2-1}F_{n+1}(x)}$$

which gives for $n=0$

$$F_{0}(x)=\dfrac{1}{1-\dfrac{(1/3)x}{1-\dfrac{(4/15)x}{1-\dfrac{(9/35)x}{1-\ddots}}}}$$

I took inspiration from this post (@Michael E2); the problem is that when I transform it to the list representation
{b0, {a1, b1}, {a2, b2}, ...}

Clear[F2, iF2];
iF2[0] = 0;
iF2[1] = {1, 1};
iF2[2] = {-x/3, 1};
iF2[n_] := {-x (n + 1)^2/(4 (n + 1)^2 - 1), 1};
F2[n_] := Table[iF2[k], {k, 0, n}];
I can't find all the terms; for 5 terms I get

Block[{n = 5}, F2[n]]
(* {0, {1, 1}, {-x/3, 1}, {-16x/63, 1}, {-25x/99, 1}, {-36x/143, 1}} *)

After {$-x/3, 1$} it is missing the terms {$-4x/15, 1$} and {$-9x/35, 1$}.
What is wrong please?
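For reference, the expansion of $F_0$ displayed above uses partial numerators $-k^2x/(4k^2-1)$ for $k = 1, 2, 3, \ldots$. A quick cross-check of the first few coefficients, done here in Python with exact fractions (an illustrative check of the expected list, not the Mathematica fix itself):

```python
from fractions import Fraction

# Coefficients k^2 / (4 k^2 - 1) of the partial numerators -k^2 x / (4 k^2 - 1)
# that the displayed expansion of F_0 calls for (k = 1, 2, 3, ...).

def coefficient(k):
    return Fraction(k * k, 4 * k * k - 1)

coeffs = [coefficient(k) for k in range(1, 6)]
# Expected from the displayed expansion: 1/3, 4/15, 9/35, 16/63, 25/99
```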
I've asked this question before on History of Science and Mathematics but haven't received an answer.
Does someone have a reference or further explanation on Gauß' entry from May 24, 1796 in his mathematical diary (Mathematisches Tagebuch, full scan available via https://gdz.sub.uni-goettingen.de/id/DE-611-HS-3382323) on page 3 regarding the divergent series $$1-2+8-64\ldots$$ in relation to the continued fraction $$\frac{1}{1+\frac{2}{1+\frac{2}{1+\frac{8}{1+\frac{12}{1+\frac{32}{1+\frac{56}{1+128}}}}}}}$$
He also states (if I read it correctly) Transformatio seriei, which could mean series transformation, but I don't see how he transforms from the series to the continued fraction, or which transformation or rule he applied.
The OEIS has an entry (https://oeis.org/A014236) for the sequence $2,2,8,12,32,56,128$, but I don't see the connection either.
My question: Can anyone help or clarify the relationship that Gauss’ used?
Torsten Schoeneberg remarked rightfully in the original question that the terms in the series are $(-1)^n\cdot 2^{\frac{1}{2}n(n+1)}$, and Gerald Edgar conjectures it might be related to Gauss's Continued Fraction.
The error I get from the calculation
When I try to calculate the square root of a fraction using a calculator, I get a mathematical error (see link to image). Why is this an apparently invalid operation?
The Page Table should contain all virtual page numbers in its logical address space. Why is that the case? Is it because we want to access a Page Table entry fast, just like an array where the key is the virtual page number, i.e. in constant time?
Or is it due to the structure of the process? (I mean, our program uses the whole logical space; in general, at address 0 we have code and at address Max we have the stack, which is variable and can point to any address of the logical address space.)
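A toy sketch of the first idea (sizes here are illustrative, not from any real architecture): when the page table has one slot per virtual page number, translation is a single array index, with no search at all:

```python
# Minimal sketch of constant-time translation: the VPN indexes directly
# into an array spanning the whole virtual address space.

PAGE_SIZE = 4096
NUM_PAGES = 8                       # tiny 8-page virtual address space
page_table = [None] * NUM_PAGES     # one slot per virtual page number
page_table[2] = 5                   # map VPN 2 -> physical frame 5

def translate(virtual_address):
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[vpn]         # O(1) array lookup, keyed by VPN
    if frame is None:
        raise LookupError("page fault")
    return frame * PAGE_SIZE + offset
```

Because every possible VPN has a slot, no comparisons or searching are needed; the price is a table entry even for unmapped pages.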
An Egyptian fraction was written as a sum of unit fractions, meaning the numerator is always 1; further, no two denominators can be the same. An easy way to create an Egyptian fraction is to repeatedly take the largest unit fraction that will fit, subtract to find what remains, and repeat until the remainder is a unit fraction, for example:
7 divided by 15 is less than 1/2 but more than 1/3, so the first unit fraction is 1/3 and the first remainder is 2/15.
Then 2/15 is less than 1/7 but more than 1/8, so the second unit fraction is 1/8 and the second remainder is 1/120.
That’s in unit form, so we are finished: 7 ÷ 15 = 1/3 + 1/8 + 1/120
I’m trying to solve the egyptian fraction problem where I’m using the below greedy method:
import math

def egyptianFraction(nr, dr):
    print("The Egyptian Fraction " +
          "Representation of {0}/{1} is".format(nr, dr), end="\n")

    # empty list ef to store denominators
    ef = []

    # loop runs until the fraction becomes 0, i.e. the numerator becomes 0
    while nr != 0:
        # taking ceiling
        x = math.ceil(dr / nr)

        # storing value in ef list
        ef.append(x)

        # updating new nr and dr
        nr = x * nr - dr
        dr = dr * x

    # printing the values
    for i in range(len(ef)):
        if i != len(ef) - 1:
            print(" 1/{0} +".format(ef[i]), end=" ")
        else:
            print(" 1/{0}".format(ef[i]), end=" ")

egyptianFraction(6, 14)
I need to build an algorithm that guarantees a maximum number of terms or a minimum largest denominator; for instance,
5 ÷ 121 = 1/25 + 1/757 + 1/763309 + 1/873960180913 + 1/1527612795642093418846225 but a simpler rendering of the same number is 1/33 + 1/121 + 1/363.
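Both expansions quoted above can be verified exactly with Python's fractions module:

```python
from fractions import Fraction

# Denominators of the two representations of 5/121 quoted above.
greedy = [25, 757, 763309, 873960180913, 1527612795642093418846225]
simpler = [33, 121, 363]

greedy_sum = sum(Fraction(1, d) for d in greedy)    # exact rational arithmetic
simpler_sum = sum(Fraction(1, d) for d in simpler)  # both should equal 5/121
```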
I believe following is a continued fraction. I’m stumped on how to solve for x
$x = \frac{a}{b} = \frac{b}{a/3}$
I know it can be re-written as
$x = \frac{a}{b} = \frac{3b}{a}$
$a^2 = 3b^2$
I’m unsure where to go from here.
Below are the choices
$x=9$
$x=\frac{1}{3}$
$x=3$
$x=\frac{1}{\sqrt3}$
$x={\sqrt3}$
I get the following values from a machine. I type the hex value in and it sets the machine to a decimal with two places after the point.
Machine setting    Hex value
0.1                3DCCCCCD
0.11               3DE147AE
0.12               3DF5C28F
0.13               3E051EB8
0.25               3E800000
0.5                3F000000
0.75               3F400000
1                  3F800000
2                  40000000
I take 0.31 and work it out to 0.4F5C28F6, but the machine says 0.31 = 3E9EB852. How do I get from 0.4F5C28F6 to 3E9EB852? Is there some mask I need to apply?
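Those machine values are IEEE-754 single-precision bit patterns (1 sign bit, 8 exponent bits, 23 mantissa bits), whereas 0x0.4F5C28F6 is the plain fixed-point binary expansion of 0.31, so no simple mask converts one to the other. Assuming the machine really does use standard IEEE-754 singles, Python's struct module reproduces the table:

```python
import struct

def float_to_hex(value):
    """IEEE-754 single-precision bit pattern of `value`, as 8 hex digits."""
    return struct.pack('>f', value).hex().upper()

def hex_to_float(bits):
    """Inverse: interpret 8 hex digits as an IEEE-754 single."""
    return struct.unpack('>f', bytes.fromhex(bits))[0]

# Table entries above: 0.25 -> 3E800000, 0.5 -> 3F000000, 1 -> 3F800000, ...
pattern = float_to_hex(0.31)        # matches the machine's 3E9EB852
```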
I am currently using custom formatting to get an improper fraction (###/###); however, I need integers to display as 7 instead of 7/1. How do I achieve both: an improper fraction OR an integer (when the fraction can be simplified to one)?
Question no. 3: kindly explain the relation between Lagrange interpolation and partial fractions (https://i.stack.imgur.com/EYVIk.jpg)
I am unable to figure out the relation between the Lagrange interpolation formula and partial fractions.
The Friedmann equation expressed in natural units ($\hbar=c=1$) is given by $$\left(\frac{\dot a}{a}\right)^2 = \frac{l_P^2}{3}\rho(t) - \frac{k}{R^2}$$ where $t$ is the proper time measured by a comoving observer, $a(t)$ is the dimensionless scale factor, $l_P=\sqrt{8\pi G\hbar/c^3}$ is the reduced proper Planck length, $\rho(t)$ is the proper mass density, the curvature parameter $k=\{-1,0,1\}$ and $R=R_0a$ is the proper spatial radius of curvature.
Now each quantity in this equation has dimensions of powers of $[\hbox{proper length}]$ therefore it seems reasonable to refer to it as the proper Friedmann equation.
I wish to find the corresponding comoving Friedmann equation that is defined solely in terms of quantities with dimensions of powers of scale-free $[\hbox{comoving length}]$. I want to then solve this equation to find the constant comoving mass density $\rho_0$ from first principles.
In order to achieve this goal I define conformal or scale-free time $\eta$ using $$dt=a\ d\eta$$ so that \begin{eqnarray*} \frac{da}{dt}&=&\frac{da}{d\eta}\frac{d\eta}{dt},\\ \dot{a} &=& \frac{a'}{a}. \end{eqnarray*} The Friedmann equation then becomes $$\left(\frac{a'}{a}\right)^2 = \frac{l_P^2}{3}\rho(\eta) a^2 - \frac{k}{R_0^2}.$$ Now the LHS of the equation and the second term on the RHS have dimensions of $[\hbox{comoving length}]^{-2}$ as required. But the remaining term involving $l_P^2\rho$ still has dimensions of $[\hbox{proper length}]^{-2}$. In order that it has dimensions of $[\hbox{comoving length}]^{-2}$ we need to express the Planck length squared, $l_P^2$, in terms of $[\hbox{comoving length}]^2$ and the mass density in terms of $[\hbox{comoving length}]^{-4}$. In order to express the proper Planck length in terms of $[\hbox{comoving length}]$ we need to divide by the scale factor so that $l_P \rightarrow l_P/a$. Finally, we replace the proper mass density $\rho(\eta)$ with the comoving mass density $\rho_0(\eta)$ which has dimensions $[\hbox{comoving length}]^{-4}$.
Thus the complete comoving Friedmann equation is given by $$\left(\frac{a'}{a}\right)^2 = \frac{l_P^2}{3}\rho_0(\eta) - \frac{k}{R_0^2}.$$ Now each quantity in this equation has dimensions of powers of $[\hbox{comoving length}]$.
Does this dimensional reasoning make sense?
Now let us find a cosmological solution for a Universe with a constant comoving mass density $\rho_0(\eta)=\rho_0$. This Universe would obey the so-called perfect cosmological principle in that it is homogeneous and isotropic in both time and space provided one uses spacetime coordinates that are themselves independent of scale.
I can solve the comoving Friedmann equation for the constant comoving mass density $\rho_0$ by defining a constant time $t_0$ such that $$\left(\frac{a'}{a}\right)^2=\frac{1}{t_0^2}$$ which has the solution $$a(\eta)=e^{\eta/t_0}.$$ By substituting $a(\eta)$ back into the comoving Friedmann equation one finds that the comoving mass density $\rho_0$ is given by $$\rho_0=\frac{3}{l_P^2t_0^2}\left(1+\frac{k t_0^2}{R_0^2}\right).$$ By substituting the $a(\eta)$ expression into $dt=a\ d\eta$ and integrating I find that the scale factor as a function of proper time $t$ takes the simple linear form $$a(t)=\frac{t}{t_0}.$$ By substituting $a(t)$ back into the proper Friedmann equation one finds that the proper mass density $\rho$ is given by $$\rho=\frac{\rho_0}{a^2}.$$ Therefore the comoving Friedmann equation implies a unique functional form for the proper mass density that does not depend on assumptions about the constituents of the Universe. |
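As a sanity check, the solution can be substituted back numerically. The sketch below (Python) uses arbitrarily chosen values for $l_P$, $t_0$, $R_0$, and $k$, none of which come from the question, and confirms that $a(t)=t/t_0$ together with $\rho=\rho_0/a^2$ satisfies the proper Friedmann equation:

```python
import math

# Numerical check, with made-up constants in natural units, that
# a(t) = t/t0 and rho = rho0/a^2 satisfy the proper Friedmann equation
# (adot/a)^2 = (lP^2/3) rho - k/(R0 a)^2.
lP, t0, R0, k = 1.0, 2.0, 3.0, 1
rho0 = (3.0 / (lP**2 * t0**2)) * (1.0 + k * t0**2 / R0**2)

def friedmann_residual(t):
    a = t / t0                  # linear scale factor found above
    adot = 1.0 / t0
    rho = rho0 / a**2           # derived proper mass density
    lhs = (adot / a)**2
    rhs = (lP**2 / 3.0) * rho - k / (R0 * a)**2
    return lhs - rhs

residuals = [abs(friedmann_residual(t)) for t in (0.5, 1.0, 5.0, 50.0)]
```

The residual vanishes at every time, as the closing claim above asserts.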
According to Coulomb's law, the electromagnetic force between point charges separated by zero distance is infinite. Does QED give the same result, or is there a difference?
The use of 'point particles' to describe the results of classical electromagnetism is questionable if it is assumed the particles really are point-like but have a finite charge. Therefore one must avoid invoking such impossible entities. Introductory textbooks usually do not go into this.
One can treat the whole of classical electromagnetism without ever invoking the notion of a point particle with finite charge. In fact one ought to do this, because the notion of a point particle with finite charge is unphysical. Such an entity would have an electric field around it scaling as $1/r^2$, and consequently an energy density scaling as $1/r^4$; upon integrating this energy density over volume one has a value that goes as $1/r$ so is infinite in the limit $r \rightarrow 0$. This means the entity would have infinite inertia. There have been attempts to argue that one can simply ignore this infinite inertia, but then other problems arise such as self-acceleration of small dipoles. Overall, the classical point particle with finite charge does not make physical sense and no such thing exists in the natural world.
With this in mind, we now have two issues: how is classical electromagnetism done correctly, and what is the nature of particles such as electrons?
Classical electromagnetism is done correctly by insisting that charge density $\rho$ (charge per unit volume) cannot be infinite. If one wishes to treat small physical entities, then one can do so by allowing them to have finite charge density, and then their total charge tends to zero when their physical size does. When one refers to a 'point charge' it can then be taken as understood that what one has in mind is something like a small spherical body with a radius $r$ that is small compared to all significant distances in the problem under discussion, but not zero, and not so small that the energy of the particle's own field exceeds the particle's rest energy. The latter distance can be estimated from $$ \frac{q^2}{4\pi\epsilon_0 r} \le m c^2 $$ where $m$ is the rest mass. If a small entity has mass $m$ and radius $r$ then its charge must be less than the limit set by this equation. If a small entity has mass $m$ and charge $q$ then its radius must be larger than the limit set by this equation.
When a charged body is accelerated by the application of a force, the forces associated with different parts of the body exerting forces on one another via their fields create a net contribution called self-force which complicates the analysis. This is also called radiation reaction. It is negligible at ordinary accelerations, but just noticeable in modern particle accelerators.
Finally, what about electrons and things like that? These entities cannot be fully described by classical physics; they require quantum physics for a good understanding. This is provided by quantum electrodynamics. An electron is an excitation of a set of interacting fields called the Dirac field and the electromagnetic field. It can be localized in space, but it cannot be perfectly localized at a point; as one attempts to do that one encounters large fields which in turn lead to electron-positron pair creation. On the other hand, the electron does not have spatial structure itself, at least on distance scales down to femtometres, which have been probed in collision experiments, but it does possess a fixed charge. Such an entity really cannot be described by classical mechanics at that distance scale. However, the behaviour of electrons at larger distance scales, larger than atoms for example, can be modeled reasonably well by treating them as small spheres---but not as point particles. This is just a model, but it is better than no model at all. A suitable size to pick for the small sphere is the radius given by the above formula: $$r_c = \frac{e^2}{4\pi\epsilon_0 m c^2}$$ This is called the classical radius of the electron, with a value of about $2.8 \times 10^{-15}$ m.
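As a sanity check on the quoted value, one can evaluate the formula directly. The sketch below uses hand-typed CODATA constants, which are not given in the answer above:

```python
import math

# Classical electron radius r_c = e^2 / (4*pi*eps0*m*c^2), with CODATA
# constants typed in by hand (not taken from the text above).
e    = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
m_e  = 9.1093837015e-31     # electron rest mass, kg
c    = 2.99792458e8         # speed of light, m/s

r_c = e**2 / (4 * math.pi * eps0 * m_e * c**2)
```

This gives about $2.82\times 10^{-15}$ m, matching the quoted value.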
Refs: I am referring to my own work here because it is the best resolution of these issues that I am aware of. The first paper below begins with a short review of other literature. The second paper is, I think, very readable and will interest anyone who cares about these issues.
A. Steane,
Reduced-order Abraham-Lorentz-Dirac equation and the consistency of classical electromagnetism, arXiv:1402.1106 doi 10.1119/1.4897951
A. Steane,
Tracking the radiation reaction energy when charged bodies accelerate, arXiv:1408.1349, doi 10.1119/1.4914421 |
Learning Objectives
- Describe the physics of rolling motion without slipping
- Explain how linear variables are related to angular variables for the case of rolling motion without slipping
- Find the linear and angular accelerations in rolling motion with and without slipping
- Calculate the static friction force associated with rolling motion without slipping
- Use energy conservation to analyze rolling motion
Rolling motion is that common combination of rotational and translational motion that we see everywhere, every day. Think about the different situations of wheels moving on a car along a highway, or wheels on a plane landing on a runway, or wheels on a robotic explorer on another planet. Understanding the forces and torques involved in rolling motion is a crucial factor in many different types of situations.
For analyzing rolling motion in this chapter, refer to Figure 10.20 in Fixed-Axis Rotation to find moments of inertia of some common geometrical objects. You may also find it useful in other calculations involving rotation.
Rolling Motion without Slipping
People have observed rolling motion without slipping ever since the invention of the wheel. For example, we can look at the interaction of a car’s tires and the surface of the road. If the driver depresses the accelerator to the floor, such that the tires spin without the car moving forward, there must be kinetic friction between the wheels and the surface of the road. If the driver depresses the accelerator slowly, causing the car to move forward, then the tires roll without slipping. It is surprising to most people that, in fact, the bottom of the wheel is at rest with respect to the ground, indicating there must be static friction between the tires and the road surface. In Figure 11.2, the bicycle is in motion with the rider staying upright. The tires have contact with the road surface, and, even though they are rolling, the bottoms of the tires deform slightly, do not slip, and are at rest with respect to the road surface for a measurable amount of time. There must be static friction between the tire and the road surface for this to be so.
To analyze rolling without slipping, we first derive the linear variables of velocity and acceleration of the center of mass of the wheel in terms of the angular variables that describe the wheel’s motion. The situation is shown in Figure 11.3.
From Figure 11.3(a), we see the force vectors involved in preventing the wheel from slipping. In (b), point P that touches the surface is at rest relative to the surface. Relative to the center of mass, point P has velocity −R\(\omega \hat{i}\), where R is the radius of the wheel and \(\omega\) is the wheel’s angular velocity about its axis. Since the wheel is rolling, the velocity of P with respect to the surface is its velocity with respect to the center of mass plus the velocity of the center of mass with respect to the surface:
$$\vec{v}_{P} = -R \omega \hat{i} + v_{CM} \hat{i} \ldotp$$
Since the velocity of P relative to the surface is zero, \(v_P = 0\), this says that
$$v_{CM} = R \omega \ldotp \label{11.1}$$
Thus, the velocity of the wheel’s center of mass is its radius times the angular velocity about its axis. We show the correspondence of the linear variable on the left side of the equation with the angular variable on the right side of the equation. This is done below for the linear acceleration.
If we differentiate Equation 11.1 on the left side of the equation, we obtain an expression for the linear acceleration of the center of mass. On the right side of the equation, R is a constant and since \(\alpha = \frac{d \omega}{dt}\), we have
$$a_{CM} = R \alpha \ldotp \label{11.2}$$
Furthermore, we can find the distance the wheel travels in terms of angular variables by referring to Figure 11.4. As the wheel rolls from point A to point B, its outer surface maps onto the ground by exactly the distance traveled, which is \(d_{CM}\).
We see from Figure 11.4 that the length of the outer surface that maps onto the ground is the arc length R\(\theta\). Equating the two distances, we obtain
$$d_{CM} = R \theta \ldotp \label{11.3}$$
Example \(\PageIndex{1}\): Rolling Down an Inclined Plane
A solid cylinder rolls down an inclined plane without slipping, starting from rest. It has mass m and radius r. (a) What is its acceleration? (b) What condition must the coefficient of static friction \(\mu_{S}\) satisfy so the cylinder does not slip?
Strategy
Draw a sketch and free-body diagram, and choose a coordinate system. We put x in the direction down the plane and y upward perpendicular to the plane. Identify the forces involved. These are the normal force, the force of gravity, and the force due to friction. Write down Newton’s laws in the x- and y-directions, and Newton’s law for rotation, and then solve for the acceleration and force due to friction.
Solution
The free-body diagram and sketch are shown in Figure 11.5, including the normal force, components of the weight, and the static friction force. There is barely enough friction to keep the cylinder rolling without slipping. Since there is no slipping, the magnitude of the friction force is less than or equal to \(\mu_{S}\)N. Writing down Newton’s laws in the x- and y-directions, we have
$$\sum F_{x} = ma_{x};\; \sum F_{y} = ma_{y} \ldotp$$
Substituting in from the free-body diagram
$$\begin{split} mg \sin \theta - f_{S} & = m(a_{CM}) x, \\ N - mg \cos \theta & = 0, \\ f_{S} & \leq \mu_{S} N, \end{split}$$
we can then solve for the linear acceleration of the center of mass from these equations:
$$(a_{CM})_{x} = g(\sin \theta - \mu_{S} \cos \theta) \ldotp$$
However, it is useful to express the linear acceleration in terms of the moment of inertia. For this, we write down Newton’s second law for rotation,
$$\sum \tau_{CM} = I_{CM} \alpha \ldotp$$
The torques are calculated about the axis through the center of mass of the cylinder. The only nonzero torque is provided by the friction force. We have
$$f_{S} r = I_{CM} \alpha \ldotp$$
Finally, the linear acceleration is related to the angular acceleration by
$$(a_{CM})_{x} = r \alpha \ldotp$$
These equations can be used to solve for \(a_{CM}\), \(\alpha\), and \(f_S\) in terms of the moment of inertia, where we have dropped the x-subscript. We write \(a_{CM}\) in terms of the component of gravity along the incline and the friction force, and make the following substitutions.
$$a_{CM} = g \sin \theta - \frac{f_{S}}{m}$$
$$f_{S} = \frac{I_{CM} \alpha}{r} = \frac{I_{CM} a_{CM}}{r^{2}}$$
From this we obtain
$$\begin{split} a_{CM} & = g \sin \theta - \frac{I_{CM} a_{CM}}{mr^{2}}, \\ & = \frac{mg \sin \theta}{m + \left(\dfrac{I_{CM}}{r^{2}}\right)} \ldotp \end{split}$$
Note that this result is independent of the coefficient of static friction, \(\mu_{S}\).
Since we have a solid cylinder, from Figure 10.20, we have \(I_{CM} = \frac{mr^{2}}{2}\) and
$$a_{CM} = \frac{mg \sin \theta}{m + \left(\dfrac{mr^{2}}{2r^{2}}\right)} = \frac{2}{3} g \sin \theta \ldotp$$
Therefore, we have
$$\alpha = \frac{a_{CM}}{r} = \frac{2}{3r} g \sin \theta \ldotp$$
Because slipping does not occur, \(f_{S} \leq \mu_{S} N\). Solving for the friction force,

$$f_{S} = I_{CM} \frac{\alpha}{r} = I_{CM} \frac{(a_{CM})}{r^{2}} = \left(\dfrac{I_{CM}}{r^{2}}\right) \left(\dfrac{mg \sin \theta}{m + \left(\dfrac{I_{CM}}{r^{2}}\right)}\right) = \frac{mg I_{CM} \sin \theta}{mr^{2} + I_{CM}} \ldotp$$

Substituting this expression into the condition for no slipping, and noting that N = mg cos \(\theta\), we have

$$\frac{mg I_{CM} \sin \theta}{mr^{2} + I_{CM}} \leq \mu_{S} mg \cos \theta$$

or

$$\mu_{S} \geq \frac{\tan \theta}{1 + \left(\dfrac{mr^{2}}{I_{CM}}\right)} \ldotp$$

For the solid cylinder, this becomes

$$\mu_{S} \geq \frac{\tan \theta}{1 + \left(\dfrac{2mr^{2}}{mr^{2}}\right)} = \frac{1}{3} \tan \theta \ldotp$$

Significance
The linear acceleration is linearly proportional to sin \(\theta\). Thus, the greater the angle of the incline, the greater the linear acceleration, as would be expected. The angular acceleration, however, is linearly proportional to sin \(\theta\) and inversely proportional to the radius of the cylinder. Thus, the larger the radius, the smaller the angular acceleration. For no slipping to occur, the coefficient of static friction must be greater than or equal to \(\frac{1}{3}\)tan \(\theta\). Thus, the greater the angle of incline, the greater the coefficient of static friction must be to prevent the cylinder from slipping.
Exercise \(\PageIndex{2}\)
A hollow cylinder is on an incline at an angle of 60°. The coefficient of static friction on the surface is \(\mu_{S}\) = 0.6. (a) Does the cylinder roll without slipping? (b) Will a solid cylinder roll without slipping?
It is worthwhile to repeat the equation derived in this example for the acceleration of an object rolling without slipping:
$$a_{CM} = \frac{mg \sin \theta}{m + \left(\dfrac{I_{CM}}{r^{2}}\right)} \ldotp \label{11.4}$$
This is a very useful equation for solving problems involving rolling without slipping. Note that the acceleration is less than that for an object sliding down a frictionless plane with no rotation. The acceleration will also be different for two rotating cylinders with different rotational inertias.
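As a quick illustration of Equation 11.4 and the no-slip condition derived in the example, the following sketch (Python, with made-up mass, radius, and angle) compares a solid and a hollow cylinder:

```python
import math

# A sketch (not from the text) of Equation 11.4 and the no-slip condition.
# The mass, radius, and angle below are made-up values; I_cm = (1/2) m r^2
# for a solid cylinder and m r^2 for a hollow one (Figure 10.20).
def a_rolling(m, r, I_cm, theta, g=9.81):
    """Acceleration down the incline for rolling without slipping (Eq. 11.4)."""
    return m * g * math.sin(theta) / (m + I_cm / r**2)

def mu_s_min(I_cm, m, r, theta):
    """Minimum coefficient of static friction for rolling without slipping."""
    return math.tan(theta) / (1.0 + m * r**2 / I_cm)

m, r, theta = 1.0, 0.5, math.radians(30)
a_solid  = a_rolling(m, r, 0.5 * m * r**2, theta)   # equals (2/3) g sin(theta)
a_hollow = a_rolling(m, r, 1.0 * m * r**2, theta)   # equals (1/2) g sin(theta)
```

The larger rotational inertia of the hollow cylinder soaks up more of the energy input into rotation, so it accelerates less; `mu_s_min` can likewise be compared against a given \(\mu_S\) to decide whether an object rolls without slipping.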
Rolling Motion with Slipping
In the case of rolling motion with slipping, we must use the coefficient of kinetic friction, which gives rise to the kinetic friction force since static friction is not present. The situation is shown in Figure 11.6. In the case of slipping, \(v_{CM} - R\omega \neq 0\), because point P on the wheel is not at rest on the surface, and \(v_P \neq 0\). Thus, \(\omega \neq \frac{v_{CM}}{R}\), \(\alpha \neq \frac{a_{CM}}{R}\).
Example \(\PageIndex{2}\): Rolling Down an Inclined Plane with Slipping
A solid cylinder rolls down an inclined plane from rest and undergoes slipping (Figure 11.7). It has mass m and radius r. (a) What is its linear acceleration? (b) What is its angular acceleration about an axis through the center of mass?
Strategy
Draw a sketch and free-body diagram showing the forces involved. The free-body diagram is similar to the no-slipping case except for the friction force, which is kinetic instead of static. Use Newton’s second law to solve for the acceleration in the x-direction. Use Newton’s second law of rotation to solve for the angular acceleration.
Solution
The sum of the forces in the y-direction is zero, so the friction force is now \(f_k = \mu_k N = \mu_k mg \cos\theta\). Newton’s second law in the x-direction becomes
$$\sum F_{x} = ma_{x},$$
$$mg \sin \theta - \mu_{k} mg \cos \theta = m(a_{CM})_{x},$$
or
$$(a_{CM})_{x} = g(\sin \theta - \mu_{k} \cos \theta) \ldotp$$
The friction force provides the only torque about the axis through the center of mass, so Newton’s second law of rotation becomes
$$\sum \tau_{CM} = I_{CM} \alpha,$$
$$f_{k} r = I_{CM} \alpha = \frac{1}{2} mr^{2} \alpha \ldotp$$
Solving for \(\alpha\), we have
$$\alpha = \frac{2f_{k}}{mr} = \frac{2 \mu_{k} g \cos \theta}{r} \ldotp$$
Significance
We write the linear and angular accelerations in terms of the coefficient of kinetic friction. The linear acceleration is the same as that found for an object sliding down an inclined plane with kinetic friction. The angular acceleration about the axis of rotation is linearly proportional to the normal force, which depends on the cosine of the angle of inclination. As \(\theta\) → 90°, this force goes to zero, and, thus, the angular acceleration goes to zero.
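The two slipping-case results can be sketched as follows (Python; the angle, \(\mu_k\), and radius are made-up values chosen for illustration):

```python
import math

# Illustrative check (not from the text) of the slipping-case results:
# a_cm = g(sin(theta) - mu_k cos(theta)) and, for a solid cylinder,
# alpha = 2 mu_k g cos(theta) / r.
def slipping_accels(theta, mu_k, r, g=9.81):
    a_cm = g * (math.sin(theta) - mu_k * math.cos(theta))
    alpha = 2.0 * mu_k * g * math.cos(theta) / r
    return a_cm, alpha

a_cm, alpha = slipping_accels(math.radians(45), 0.3, 0.5)
# Note a_cm != r * alpha: with slipping, the no-slip constraint fails.
```

Here \(a_{CM} \approx 4.86\ \mathrm{m/s^2}\) while \(r\alpha \approx 4.16\ \mathrm{m/s^2}\), confirming that the no-slip relation between linear and angular acceleration no longer holds.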
Conservation of Mechanical Energy in Rolling Motion
In the preceding chapter, we introduced rotational kinetic energy. Any rolling object carries rotational kinetic energy, as well as translational kinetic energy and, if the system requires it, potential energy. Including the gravitational potential energy, the total mechanical energy of an object rolling is
$$E_{T} = \frac{1}{2} mv^{2}_{CM} + \frac{1}{2} I_{CM} \omega^{2} + mgh \ldotp$$
In the absence of any nonconservative forces that would take energy out of the system in the form of heat, the total energy of a rolling object without slipping is conserved and is constant throughout the motion. Examples where energy is not conserved are a rolling object that is slipping, production of heat as a result of kinetic friction, and a rolling object encountering air resistance.
You may ask why a rolling object that is not slipping conserves energy, since the static friction force is nonconservative. The answer can be found by referring back to Figure 11.3. Point P in contact with the surface is at rest with respect to the surface. Therefore, its infinitesimal displacement d\(\vec{r}\) with respect to the surface is zero, and the incremental work done by the static friction force is zero. We can apply energy conservation to our study of rolling motion to bring out some interesting results.
Example \(\PageIndex{3}\): Curiosity Rover
The Curiosity rover, shown in Figure 11.8, was deployed on Mars on August 6, 2012. The wheels of the rover have a radius of 25 cm. Suppose astronauts arrive on Mars in the year 2050 and find the now-inoperative Curiosity on the side of a basin. While they are dismantling the rover, an astronaut accidentally loses a grip on one of the wheels, which rolls without slipping down into the bottom of the basin 25 meters below. If the wheel has a mass of 5 kg, what is its velocity at the bottom of the basin?
Strategy
We use mechanical energy conservation to analyze the problem. At the top of the hill, the wheel is at rest and has only potential energy. At the bottom of the basin, the wheel has rotational and translational kinetic energy, which must be equal to the initial potential energy by energy conservation. Since the wheel is rolling without slipping, we use the relation \(v_{CM} = r\omega\) to relate the translational variables to the rotational variables in the energy conservation equation. We then solve for the velocity. From Figure 11.8, we see that a hollow cylinder is a good approximation for the wheel, so we can use this moment of inertia to simplify the calculation.
Solution
Energy at the top of the basin equals energy at the bottom:
$$mgh = \frac{1}{2} mv_{CM}^{2} + \frac{1}{2} I_{CM} \omega^{2} \ldotp$$
The known quantities are \(I_{CM} = mr^{2}\), r = 0.25 m, and h = 25.0 m.
We rewrite the energy conservation equation eliminating \(\omega\) by using \(\omega = \frac{v_{CM}}{r}\). We have
$$mgh = \frac{1}{2} mv_{CM}^{2} + \frac{1}{2} mr^{2} \frac{v_{CM}^{2}}{r^{2}}$$
or
$$gh = \frac{1}{2} v_{CM}^{2} + \frac{1}{2} v_{CM}^{2} \Rightarrow v_{CM} = \sqrt{gh} \ldotp$$
On Mars, the acceleration of gravity is 3.71 m/s\(^{2}\), which gives the magnitude of the velocity at the bottom of the basin as
$$v_{CM} = \sqrt{(3.71\; m/s^{2})(25.0\; m)} = 9.63\; m/s \ldotp$$
Significance
This is a fairly accurate result considering that Mars has very little atmosphere, and the loss of energy due to air resistance would be minimal. The result also assumes that the terrain is smooth, such that the wheel wouldn’t encounter rocks and bumps along the way.
Also, in this example, the kinetic energy, or energy of motion, is equally shared between linear and rotational motion. If we look at the moments of inertia in Figure 10.20, we see that the hollow cylinder has the largest moment of inertia for a given radius and mass. If the wheels of the rover were solid and approximated by solid cylinders, for example, there would be more kinetic energy in linear motion than in rotational motion. This would give the wheel a larger linear velocity than the hollow cylinder approximation. Thus, the solid cylinder would reach the bottom of the basin faster than the hollow cylinder.
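A compact way to check this example, and the hollow-versus-solid comparison, is to write \(I_{CM} = \beta mr^2\), so that energy conservation gives \(v = \sqrt{2gh/(1+\beta)}\). The sketch below (Python) uses the Mars values from the example:

```python
import math

# Energy-conservation sketch for the rover-wheel example: mgh equals
# translational plus rotational kinetic energy with v = r*omega, so with
# beta = I_cm/(m r^2) (1 for a hollow cylinder, 1/2 for a solid one)
# the speed at the bottom is v = sqrt(2 g h / (1 + beta)).
def v_bottom(g, h, beta):
    return math.sqrt(2.0 * g * h / (1.0 + beta))

g_mars, h = 3.71, 25.0
v_hollow = v_bottom(g_mars, h, 1.0)   # = sqrt(g*h), about 9.63 m/s as in the text
v_solid  = v_bottom(g_mars, h, 0.5)   # a solid wheel would arrive faster
```

The solid wheel arrives at about 11.1 m/s, illustrating the last point above: less of its kinetic energy is tied up in rotation.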
Contributors
Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0). |
Definition: Reflexive Closure
Let $\mathcal R$ be a relation on a set $S$.
The reflexive closure of $\mathcal R$ is denoted $\mathcal R^=$, and is defined as:
$\mathcal R^= := \mathcal R \cup \set {\tuple {x, x}: x \in S}$
That is:
$\mathcal R^= := \mathcal R \cup \Delta_S$
where $\Delta_S$ is the diagonal relation on $S$.
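For a finite set, this definition translates directly into code. A minimal sketch (Python; the example relation is made up for illustration):

```python
# Reflexive closure of a relation R on a finite set S, directly following
# the definition R^= = R ∪ Δ_S, with relations as sets of ordered pairs.
def reflexive_closure(R, S):
    diagonal = {(x, x) for x in S}   # the diagonal relation Δ_S
    return R | diagonal

S = {1, 2, 3}
R = {(1, 2), (2, 3)}
closure = reflexive_closure(R, S)
```

For the example relation this yields \(\{(1,2), (2,3), (1,1), (2,2), (3,3)\}\).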
The reflexive closure of $\mathcal R$ can equivalently be defined as:
$\mathcal R^= := \bigcap \mathcal Q$
where $\mathcal Q$ is the set of all reflexive relations on $S$ which contain $\mathcal R$.
That is:
$\mathcal R^=$ is the smallest reflexive relation on $S$ which contains $\mathcal R$.
Equivalence of Definitions
Also see
Results about reflexive closures can be found here.
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is.
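For what it's worth, the geometry can be checked numerically. Assuming the defining relation is \(x = \pm\sqrt{R^2-b^2}\,\sin\sigma\), \(z = R\cos\sigma\) (the square root appears garbled in the quote), curves of constant \(R\) are ellipses with foci at \(z = \pm b\), which is what makes the coordinates prolate spheroidal. A sketch with made-up values of \(b\) and \(R\):

```python
import math

# For fixed R, the map sigma -> (x, z) = (sqrt(R^2-b^2) sin(sigma), R cos(sigma))
# traces an ellipse with semi-axes sqrt(R^2-b^2) and R.  Its foci sit at
# z = +/- b, so curves of constant R are confocal ellipses.
b, R = 1.0, 2.5                  # made-up values
aa = math.sqrt(R**2 - b**2)      # semi-axis in the x-direction

def point(sigma):
    return aa * math.sin(sigma), R * math.cos(sigma)

def sum_focal_distances(sigma):
    x, z = point(sigma)
    return math.hypot(x, z - b) + math.hypot(x, z + b)

# For an ellipse the sum of distances to the two foci is constant (= 2R here).
sums = [sum_focal_distances(s) for s in (0.3, 1.1, 2.0, 2.9)]
```

The constant focal-distance sum confirms the confocal-ellipse picture; curves of constant \(\sigma\) are the orthogonal confocal hyperbolae.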
Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(1/z) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!\ldots}$ ...
I am having trouble understanding non-isolated singularity points. An isolated singularity I do kind of understand: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...
No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...
The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?
mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it
Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true:
(1) If $x=y$ then $x\sim y$.
(2) If $x=y$ then $y\sim x$.
(3) If $x=y$ and $y=z$ then $x\sim z$.
Basically, I think that all the three properties follows if we can prove (1) because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$ proving (2). (3) will follow similarly.
This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.
I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."
That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems...
(comment on many many posts above)
In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I have ever seen, with so many 000000s and 999999s
But I think that to prove the implication for transitivity a use of the inference rule MP seems to be necessary. But that would mean that for logics for which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms).
This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality.
Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.
@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.
@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.
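A tiny numeric illustration of the point (the functions below are made up): if \(|f|\le K_1x^2\) and \(|g|\le K_2x^2\) near 0, then \(|f-g|\le (K_1+K_2)x^2\), so the ratio \(|f-g|/x^2\) stays bounded as \(x\to 0\):

```python
# Two made-up functions that are O(x^2) as x -> 0; their difference is
# still O(x^2) because |f - g| <= (K1 + K2) x^2.
def f(x): return 3 * x**2          # O(x^2) with K1 = 3
def g(x): return x**2 - x**3       # O(x^2) with K2 = 2 near 0

# The ratio |f - g| / x^2 stays bounded (here it tends to 2) as x -> 0.
ratios = [abs(f(x) - g(x)) / x**2 for x in (0.1, 0.01, 0.001)]
```

The ratio approaching a finite limit is exactly what \(\mathscr O(x^2)-\mathscr O(x^2)=\mathscr O(x^2)\) asserts; the minus sign never helps or hurts the bound.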
Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}$. I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\cdots,a_{n-1}$ to be zero because by triangul...
@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?
Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$. |
Circle
The functions \(\sin (\theta)\) and \(\cos (\theta)\) are defined such that, if you drew a circle of radius \(r\) with a triangle inset like the one below, the lengths of the sides will have the listed values:
As the picture suggests, \(\sin \theta\) and \(\cos \theta\) have values that repeat when you increase or decrease \(\theta\) by increments of \(2 \pi\).
Using the figure above and the Pythagorean Theorem, we can write
\[(r\sin\theta)^2 + (r\cos\theta)^2 = r^2\]
Dividing the whole equation by \(r^2\) we obtain the useful result:
\[\sin^2\theta + \cos^2\theta = 1\]
where \(\sin^2\theta\) is just a short way of writing \((\sin\theta)^2\). The above equation is true for any value of \(\theta\).
Using the above results, we can also derive these equations:
\[\tan\theta = \dfrac{\sin\theta}{\cos\theta}\]
\[\sin\theta = \pm\sqrt{1-\cos^2\theta}\]
\[\cos\theta = \pm\sqrt{1-\sin^2\theta}\]
Trigonometric Identities
\[\sin A + \sin B = 2\sin\left(\dfrac{A + B}{2}\right)\cos\left(\dfrac{A - B}{2}\right)\]
\[\cos A + \cos B = 2\cos\left(\dfrac{A + B}{2}\right)\cos\left(\dfrac{A - B}{2}\right)\]
\[\sin(A + B) = \sin A \cos B + \sin B \cos A\]
\[\cos(A + B) = \cos A \cos B - \sin A \sin B\]
\[\tan(A+B) = \dfrac{\tan A + \tan B}{1 - \tan A\tan B}\]
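These identities are easy to spot-check numerically. A quick sketch (the angles are arbitrary):

```python
import math

# Spot-check of the addition formulas at arbitrary angles A and B.
A, B = 0.7, 1.9
lhs_sin = math.sin(A + B)
rhs_sin = math.sin(A) * math.cos(B) + math.sin(B) * math.cos(A)
lhs_cos = math.cos(A + B)
rhs_cos = math.cos(A) * math.cos(B) - math.sin(A) * math.sin(B)
```

Both pairs agree to machine precision, as they must for every choice of A and B.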
Small-Angle Approximation
If \(\theta\), expressed in radians, is close to zero, then we can approximate:
\[\sin\theta \approx \theta\]
\[\cos\theta \approx 1\]
\[\tan\theta \approx \theta\] |
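One can also check how good these approximations are; the leading error terms from the Taylor series are \(\theta^3/6\) for sine, \(\theta^2/2\) for cosine, and \(\theta^3/3\) for tangent. A sketch with an arbitrary small angle:

```python
import math

# Errors of the small-angle approximations at theta = 0.1 rad.  The
# comments give the leading Taylor error term in each case.
theta = 0.1
sin_err = abs(math.sin(theta) - theta)   # about theta**3 / 6
cos_err = abs(math.cos(theta) - 1.0)     # about theta**2 / 2
tan_err = abs(math.tan(theta) - theta)   # about theta**3 / 3
```

At \(\theta = 0.1\) the sine approximation is already good to about one part in six thousand, which is why the approximation is so widely used.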
I don’t think we’re clear on what simulation is NOT. RANDOMNESS IS NOT NECESSARY, for the simple reason randomness is merely a state of knowledge. Hence this classic post from 12 June 2017.
“Let me get this straight. You said what makes your car go?”
“You heard me. Gremlins.”
“Gremlins make your car go.”
“Look, it’s obvious. The car runs, doesn’t it? It has to run for some reason, right? Everybody says that reason is gremlins. So it’s gremlins. No, wait. I know what you’re going to say. You’re going to say I don’t know why gremlins make it go, and you’re right, I don’t. Nobody does. But it’s gremlins.”
“And if I told you instead your car runs by a purely mechanical process, the result of internal combustion causing movement through a complex but straightforward process, would that interest you at all?”
“No. Look, I don’t care. It runs and that it’s gremlins is enough explanation for me. I get where I want to go, don’t I? What’s the difference if it’s gremlins or whatever it is you said?”
MCMC
That form of reasoning is used by defenders of simulations, a.k.a. Monte Carlo or MCMC methods (the other MC is for Markov Chain), in which gremlins are replaced by “randomness” and “draws from distributions.” Like the car run by gremlins, MCMC methods get you where you want to go, so why bother looking under the hood for more complicated explanations? Besides, doesn’t everybody agree simulations work by gremlins—I mean, “randomness” and “draws”?
Here is an abbreviated example from Uncertainty which proves it's a mechanical process and not gremlins or randomness that accounts for the success of MCMC methods.
First let’s use gremlin language to describe a simple MCMC example. Z, I say, is “distributed” as a standard normal, and I want to know the probability Z is less than -1. Now the normal distribution is not an analytic equation, meaning I cannot just plug in numbers and calculate an answer. There are, however, many excellent approximations to do the job near enough, meaning I can with ease calculate this probability to reasonable accuracy. The R software does so by typing pnorm(-1), which gives 0.1586553. This gives us something to compare our simulations to.
I could also get at the answer using MCMC. To do so I randomly—recall we’re using gremlin language—simulate a large number of draws from a standard normal, and count how many of these simulations are less than -1. Divide that number by the total number of simulations, and there is my approximation to the probability. Look into the literature and you will discover all kinds of niceties to this procedure (such as computing how accurate the approximation is, etc.), but this is close enough for us here. Use the following self-explanatory R code:
n = 10000
z = rnorm(n)
sum(z < -1)/n
I get 0.158, which is, for applications not requiring accuracy beyond the third digit, peachy keen. Play around with the size of n: e.g., with n = 10, I get for one simulation 0.2, which is not so hot. In gremlin language, the larger the number of draws the closer will the approximation "converge" to the right answer.
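To see that "convergence" at work, here is a small sketch (the particular n values are my choice; since the draws change run to run, your numbers will differ too):

```r
# Watch the approximation settle toward pnorm(-1) as n grows.
# Each run gives slightly different numbers: that variation is the point.
for (n in c(10, 100, 10000, 1000000)) {
  z = rnorm(n)
  cat("n =", n, " estimate =", sum(z < -1)/n, "\n")
}
pnorm(-1)  # the target value
```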
All MCMC methods are the same as this one in spirit. Some can grow to enormous complexity, of course, but the base idea, the philosophy, is all right here. The approximation is not seen as legitimate because we can match it against a near-analytic answer; we can't do that for any situation of real interest (if we could, we wouldn't need simulations!). It is seen as legitimate because of the way the answer was produced. Random draws imbued the structure of the MCMC "process" with a kind of mystical life. If the draws weren't random (and never mind defining what random really means) the approximation would be off, somehow, like in a pagan ceremony where somebody forgot to light the black randomness candle.
Of course, nobody speaks in this way. Few speak of the process at all, except to say it was gremlins; or rather, "randomness" and "draws". It's stranger still because the "randomness" is all computer-generated, and it is known computer-generated numbers aren't "truly" random. But, somehow, the whole thing still works, like the randomness candle has been swapped for a (safer!) electric version, and whatever entities were watching over the ceremony were satisfied the form has been met.
Mechanics
Now let's do the whole thing over in mechanical language and see what the differences are. By assumption, we want to quantify our uncertainty in Z using a standard normal distribution. We seek Pr(Z < -1 | assumption). We do
not say Z "is normally distributed", which is gremlin talk. We say our uncertainty in Z is represented using this equation by assumption.
One popular way of "generating normals" (in gremlin language) is to use what's called a Box-Muller transformation. Any algorithm which needs "normals" can use this procedure. It starts by "generating" two "random independent uniform" numbers U_1 and U_2 and then calculating this creature:
Z = \sqrt{-2 \ln U_1} \cos(2 \pi U_2),
where Z is now said to be "standard normally distributed." We don't need to worry about the math, except to notice that it is written as a causal, or rather determinative, proposition: "If U_1 is this and U_2 is that, Z is this with certainty." No uncertainty enters here; U_1 and U_2 determine Z. There is no life to this equation; it is (in effect) just an equation which translates a two-dimensional straight line on the interval 0 to 1 (in 2-D) to a line with a certain shape which runs from negative infinity to positive infinity.
To get the transformation, we simply write down all the numbers in the paired sequence (0.01, 0.01), (0.01, 0.02), ..., (0.99, 0.99). The decision to use two-digit accuracy was mine, just as I had to decide n above. This results in a sequence of pairs of numbers (U_1, U_2) of length 9801. For each pair, we apply the determinative
mapping of (U_1, U_2) to produce Z as above, which gives (3.028866, 3.010924, ..., 1.414971e-01). Here is the R code (not written for efficiency, but transparency):

ep = 0.01 # the (st)ep
u1 = seq(ep, 1-ep, by = ep) # gives 0.01, 0.02, ..., 0.99
u2 = u1
z = NA # start with an empty vector
k = 0 # just a counter
for (i in u1){
  for (j in u2){
    k = k + 1
    z[k] = sqrt(-2*log(i))*cos(2*pi*j) # the transformation
  }
}
z[1:10] # shows the first 10 numbers of z
The first 10 numbers of Z map to the pairs (0.01, 0.01), (0.01, 0.02), (0.01, 0.03), ..., (0.01, 0.10). There is nothing at all special about the order in which the (U_1, U_2) pairs are input. In the end, as long as the "grid" of numbers implied by the loop is fed into the formula, we'll have our Z. We do not say U_1 and U_2 are "independent". That's gremlin talk. We speak of Z in purely causal terms. If you like, try this:
plot(z)
We have not "drawn" from any distribution here, neither uniform nor normal. All that has happened is some perfectly simple math. And there is nothing "random". Everything is determined, as shown. The mechanical approximation is got the same way:
sum(z < -1)/length(z) # the denominator counts the size of z
which gives 0.1608677, which is a tad high. Try lowering ep, which is to say, try increasing the step resolution, and see what that does. It is important to recognize that the mechanical method will always give the same answer (with the same inputs) regardless of how many times we compute it. Whereas the MCMC method above gives different numbers each time. Why?

Gremlins slain
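The contrast is easy to demonstrate. Here is a sketch (the function names and the ep default are mine) that runs each method twice:

```r
# Grid method: a fixed computation; same inputs give the same answer every time.
grid_est = function(ep = 0.01) {
  u = seq(ep, 1 - ep, by = ep)
  g = expand.grid(u1 = u, u2 = u)           # all 99 x 99 pairs
  z = sqrt(-2*log(g$u1)) * cos(2*pi*g$u2)   # the Box-Muller transformation
  sum(z < -1) / length(z)
}

# MCMC-style method: re-running it changes the answer (unless the
# pseudorandom generator is re-seeded to the same state).
mc_est = function(n = 10000) {
  u1 = runif(n); u2 = runif(n)
  z = sqrt(-2*log(u1)) * cos(2*pi*u2)
  sum(z < -1) / n
}

c(grid_est(), grid_est())  # identical by construction
c(mc_est(), mc_est())      # almost surely different
```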
Here is the gremlin R code, which first "draws" from "uniforms", and then applies the transformation. The ".s" are to indicate simulation.
n = 10000
u1.s = runif(n)
u2.s = runif(n)
z.s = sqrt(-2*log(u1.s))*cos(2*pi*u2.s)
sum(z.s < -1)/n
The first time I ran this, I got 0.1623, which is much worse than the mechanical, but the second time I got 0.1589, which is good. Even in the gremlin approach, though, there is no "draw" from a normal. Our Z is still absolutely determined from the values of (u1.s, u2.s). That is, even in the gremlin approach, there is at least one mechanical process: calculating Z. So what can we say about (u1.s, u2.s)?
Here is where it gets interesting. Here is a plot of the empirical cumulative distribution of U_1 values from the mechanical procedure, overlaid with the ECDF of u1.s in red. It should be obvious the plots for U_2 and u2.s will be similar (but try!). Generate this yourself with the following code:
plot(ecdf(u1), xlab="U_1 values", ylab="Probability of U1 < value", xlim=c(0,1), pch='.')
lines(ecdf(u1.s), col=2)
abline(0,1,lty=2)
The ECDF of the U_1 values is a rough step function; after all, there are only 99 values, while u1.s is of length n = 10000.
Do you see it yet? The gremlins have almost disappeared! If you don't see it---and do try and figure it out before reading further---try this code:
sort(u1.s)[1:20]
This gives the first 20 values of the "random" u1.s sorted from low to high. The values of U_1 were 0.01, 0.02, ... automatically sorted from low to high.
Do you see it yet? All u1.s is is a series of ordered numbers on the interval from 1e-6 to 1 - 1e-6. And the same for u2.s. (The 1e-6 is R's native display resolution for this problem; this can be adjusted.) And the same for U_1 and U_2, except the interval is a mite shorter! What we have are nothing but ordinary sequences of numbers from (roughly) 0 to 1! Do you have it?
The answer is: The gremlin procedure is identical to the mechanical!
Everything in the MCMC method was just as fixed and determined as the other mechanical method. There was nothing random, there were no draws. Everything was simple calculation, relying on an
analytic formula somebody found that mapped two straight lines to one crooked one. But the MCMC method hides what's under the hood. Look at this plot (with the plot screen maximized; again, this is for transparency not efficiency):
plot(u1.s, u2.s, col=2, xlab='U 1 values', ylab='U 2 values')
u1.v = NA; u2.v = NA
k = 0
for (i in u1){
  for (j in u2){
    k = k + 1
    u1.v[k] = i
    u2.v[k] = j
  }
}
points(u1.v, u2.v, pch=20) # these are (U_1, U_2) as one long vector of each
The black dots are the (U_1, U_2) pairs and the red the (u1.s, u2.s) pairs fed into the Z calculation. The mechanical is a regular grid and the MCMC-mechanical is also a (rougher) grid. So it's no wonder they give the same (or similar) answers: they are doing the same things.
The key is that the u1.s and u2.s themselves were produced by a purely mechanical process as well. R uses a formula no different in spirit from the one for Z above, which if fed the same numbers always produces the same output (stick in a known seed W which determines u1.s, etc.). The formula is called a "pseudorandom number generator", where by "pseudorandom" they mean not random; purely mechanical. Everybody knows this, and everybody knows this, too: there is no point at which "randomness" or "draws" ever comes into the picture. There are no gremlins anywhere.
Now I do not and in no way claim that this grunt-mechanical, rigorous-grid approach is the way to handle all problems or that it is the most efficient. And I do not say the MCMC car doesn't get us where we are going. I am saying, and it is true, there are no gremlins. Everything is a determinate, mechanical process.
So what does that mean? I'm glad you asked. Let's let the late-great ET Jaynes give the answer. "It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought."
We can believe in gremlins if we like, but we can do better if we understand how the engine really works.
There's lots more details, like the error of approximation and so forth, which I'll leave to Uncertainty (which does not have any code).

Bonus code
The value of -1 was nothing special. We can see the mechanical and MCMC procedures produce normal distributions which match almost everywhere. To see that, try this code:
plot(ecdf(z), xlab="Possible values of Z", ylab="Probability of Z < value", main="A standard normal")
s = seq(-4, 4, by = ep)
lines(s, pnorm(s), lty=2, col=2)
lines(ecdf(z.s), lty=3, col=3)
This is the (e)cdf of the distributions: mechanical Z (black solid), analytic approximation (red dashed), gremlin (green dotted). The step in the middle is from the crude step in the mechanical. Play with the limits of the axes to "blow up" certain sections of the picture, like this:
plot(ecdf(z), xlab="Possible values of Z", ylab="Probability of Z < value", main="A standard normal", xlim=c(-1,1))
s = seq(-4, 4, by = ep)
lines(s, pnorm(s), lty=2, col=2)
lines(ecdf(z.s), lty=3, col=3)
Try xlim=c(-4,-3) too.
Homework
Find the values of U_1 and U_2 that correspond to Z = -1. Using the modern language, what can you say about these values in relation to the (conditional!) probability Z < -1? Think about the probabilities of the Us.
What other simple transforms can you find that correspond to other common distributions? Try out your own code for these transforms.
(Sorry was asleep at that time but forgot to log out, hence the apparent lack of response)
Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by k; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
@Rubio The options are available to me and I've known about them the whole time but I have to admit that it feels a bit rude if I act like an attribution vigilante that goes around flagging everything and leaving comments. I don't know how the process behind the scenes works but what I have done up to this point is leave a comment then wait for a while. Normally I get a response or I flag after some time has passed. I'm guessing you say this because I've forgotten to flag several times
You can always leave a friendly comment if you like, but flagging gets eyes on it to get the problem addressed - ideally before people start answering it. Something we don't want is for people to farm rep off someone else's content, which we see occasionally; but even beyond that, SE in general and we in particular dislike it when people post content they didn't create without properly acknowledging its source. And most of the creative effort here is in the question.
So yeah, it's best to flag it when you see it. That'll put it into the queue for reviewers to agree (or not) - so don't worry that you're single-handedly (-footedly?) stomping on people :)
Unfortunately, a significant part of the time, the asker never supplies the origin. Sometimes they self-delete the question rather than just tell us where it came from. Other times they ignore the request and the whole thing, including whatever effort people put into answering, gets discarded when the question is deleted.
Okay. This is the first Riley I've written, and it gets progressively harder as you go along, so here goes. I wrote this, and then realized that I used a mispronunciation of the target, so I had to sloppily improvise. I apologize. Anyway, I hope you enjoy it!My prefix is just shy of white,Yet...
IBaNTsJTtStPMP means "I'm Bad at Naming Things, so Just Try to Solve this Patterned Masyu Puzzle!".The original Masyu rules apply.Make a single loop with lines passing through the centers of cells, horizontally or vertically. The loop never crosses itself, branches off, or goe...
This puzzle is based off the What is a Word™ and What is a Phrase™ series started by JLee and their spin-off What is a Number™ series.If a word conforms to a certain rule, I call it an Etienne Word™. Use the following examples to find the rule:These are not the only examples of Etienne Wo...
This puzzle is based off the What is a Word™ and What is a Phrase™ series started by JLee and their spin-off What is a Number™ series.If a word conforms to a certain rule, I call it an Eternal Word™. Use the following examples to find the rule:$$% set Title text. (spaces around the text ARE ...
IntroductionI am an enthusiastic geometry student, preparing for my first quiz. Yet while revising I accidentally spilt my coffee onto my notes. Can you rescue me and draw me a diagram so that I can revise it for tomorrow’s test? Thank you very much!My Notes
Sometimes you are this wordRemove the first letter, does not change the meaningRemove the first two letters, still feels the sameRemove the first three letters and you find a wayRemove the first four letters and you get a numberThe letters rearranged is a surnameWh...
– "Sssslither..."Brigitte jumped. The voice had whispered almost directly into her ear, yet there was nobody to be seen. She looked at the ground beneath her feet. Was something moving? She was probably imagining things again.– Did you hear something? she asked her guide, Skaylee....
The creator of this masyu forgot to add the final stone, so the puzzle remains incomplete. Finish his job for him by placing one additional stone (either black or white) on the board so that the result is a uniquely solvable masyu.Normal masyu rules apply.
So here's a standard Nurikabe puzzle.I'll be using the final (solved) grid for my upcoming local puzzle competition logo as it will spell the abbreviation of the competition name. So, what does it spell?Rules (adapted from Nikoli):Fill in the cells under the following rules....
I've designed a set of dominoes puzzles that I call Donimoes. You slide thedominoes like the cars in Nob Yoshigahara's Rush Hour puzzle, always alongtheir long axis. The goal of Blocking Donimoes is to slide all the dominoesinto a rectangle, without sliding any matching numbers next to each ot...
I am mud that will trap you. I am a colloid hydrogel. What am I?Take the first half of me and add me to this:I am dangerous to wolves and werewolves alike. Some people even say that I am dangerous to unholy things. Use the creator of Poirot to find out: What am I?Now, take another word for ...
Clark who is consecutive in nature, lives in California near the 100th street. Today he decided to take his palindromic boat and visit France. He booked a room which has a number of thrice a prime. Then he ordered Taco and Cola for his breakfast. The online food delivery site asked him to enter t...
Suppose you are sitting comfortably in your universe admiring the word SING. Just then, Q enters your universe and insists that you insert the string "IMMER" into your precious word to create a new word for his amusement.Okay, you can make the word IMMERSING...But then you realize, you can a...
You! I see you walking thereNary a worry or a careCome, listen to me speakMy mind is strong, though my body is weak.I've got a riddle for you to ponderSomething to think about whilst you wanderIt's a classic Riley, a word split in threeFor a prefix, an...
@OmegaKrypton rather a poor solution, I think, but I'll try it anyway: Quarrel= cross words. When combined heartlessly: put them together by removing the middle space. Thus, crosswords. Nonstop: remove the final letter. We've made crossword = feature in daily newspaper
I saw this photo on LinkedIn: Is this a puzzle? If so, what does it mean and what is a solution? What I've found so far: $a = \pi r^2$ is clearly the area of a disk of radius $r$; $2\pi r$ is clearly its circumference; $\displaystyle \int\dfrac{dx}{\sin x} = \ln\left(\left| \tan \dfrac{x}{2}\right|\...
Consider the reaction scheme:
$$\ce{S + E ->[$k_1$] C1} \qquad \ce{C1 ->[$k_2$] E + P} \qquad \ce{S + C1 <=>[$k_3$][$k_4$]C2}$$
where $\ce{S}$ is the substrate, $\ce{E}$ is the enzyme, $\ce{P}$ is the product, $\ce{C1}$ and $\ce{C2}$ are enzyme substrate complexes. Let $[\ce{S}] = s$, $[\ce{E}] = e$, $[\ce{C1}] = c_1$, $[\ce{C2}] = c_2$ and $[\ce{P}] = p$ be the concentrations of each respective chemical. I have simplified this system down to
\begin{align} \frac{\mathrm ds}{\mathrm dt} &= -k_1se_0 + (k_1-k_3)sc_1 + (k_1s+k_4)c_2 \\ \frac{\mathrm dc_1}{\mathrm dt} &= k_1se_0 - (k_1s+k_2+k_3s)c_1+(k_4-k_1s)c_2 \\ \frac{\mathrm dc_2}{\mathrm dt} &= k_3sc_1-k_4c_2 \\ \end{align}
using the conservation equation $e=e_0-c_1-c_2$. I have found that $p(t) = k_2\int c_1(t) \,\mathrm dt$. Now I need to use the quasi-steady state hypothesis to show that $$\frac{\mathrm ds}{\mathrm dt}= - f(s),\qquad f(s) = \frac{k_1e_0s}{1+\frac{k_1}{k_2}s + \frac{k_1k_3}{k_2k_4}s^2}.$$
Now I'm not really given much information on this hypothesis. I have been told it means
We assume that the initial stage of complex formation is very fast. After which it is essentially at equilibrium.
So how do I apply the hypothesis to transform $\mathrm ds/\mathrm dt$?
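If it helps, the hypothesis is usually formalized by treating the complexes as fast variables and setting their derivatives to zero; a sketch (the remaining algebra is routine):

```latex
% Quasi-steady state: set dc_1/dt = dc_2/dt = 0.
\frac{\mathrm dc_2}{\mathrm dt} = 0
  \;\Longrightarrow\; c_2 = \frac{k_3}{k_4}\,s\,c_1, \qquad
\frac{\mathrm dc_1}{\mathrm dt} = 0
  \;\Longrightarrow\; c_1 = \frac{k_1 e_0 s}{k_2 + k_1 s + \frac{k_1 k_3}{k_4}s^2}.
% Substituting both relations into ds/dt collapses it to ds/dt = -k_2 c_1,
% which equals -f(s) after dividing numerator and denominator by k_2.
```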
Definition: Zero (Number)
Let $\left({S, \circ, \preceq}\right)$ be a naturally ordered semigroup.
The zero of $S$ is the element $0 \in S$ such that:
$\forall n \in S: 0 \preceq n$
Let $\C$ denote the set of complex numbers.
The zero of $\C$ is the complex number:
$0 + 0 i$
The Babylonians from the $2$nd century BCE used a number base system of arithmetic, with a placeholder to indicate that a particular place within a number was empty, but its use was inconsistent. However, they had no actual recognition of zero as a mathematical concept in its own right.
The Ancient Greeks had no conception of zero as a number; even where the idea appeared, there were reservations about its existence, and misunderstanding about how it behaved.
In Ganita Sara Samgraha of Mahāvīrāchārya, c. $850$ CE, appears:
"A number multiplied by zero is zero, and that number remains unchanged which is divided by, added to or diminished by zero."
It was not until the propagation of Arabic numbers, where its use as a placeholder made it important, that zero became commonplace.
The Sanskrit word used by the early Indian mathematicians for zero was
sunya, which means empty, or blank.
In Arabic this was translated as
sifr.
This was translated via the Latin
zephirum into various European languages as zero, cifre, cifra, and into English as zero and cipher.
During summer 2019, I was invited to the Department of Applied Mathematics and Theoretical Physics of the University of Cambridge, where I worked with Noemie Debroux and Angelica I. Aviles-Rivero on MRI reconstruction.
The Project
Due to technical difficulties, MRI data are incomplete; thus, to recover a full image of the patient, we have to develop techniques to extrapolate from this set of incomplete data. If you try to reconstruct the MRI images by inverse Fourier transform, you get this result:
Reference images Inverse Fourier Transform reconstruction
This method can be seen as a minimization of the least squares problem $\min_{\mathbf{m}} \frac{1}{2}\|K\mathbf{m} - f\|_2^2$, where we have,
$\mathbf{m}$: The Image sequence we want to reconstruct, $K$ : The Fourier undersampling operator, $f$ : The data we possess,
The reconstruction is not that bad (in fact $\text{ssim} = 0.82$) but clearly imperfect. We can improve this result by adding prior knowledge of the data, such as a sparse representation.
Traditional methods
We know that MRI images are sparse in the wavelet domain. Moreover, if we assume that MRI images are piecewise constant, traditional variational methods tend to minimize an energy of the form $\min_{\mathbf{m}} \frac{1}{2}\|K\mathbf{m} - f\|_2^2 + \lambda_1\|\nabla\mathbf{m}\|_1 + \lambda_2\|\Psi\mathbf{m}\|_1$. Hence this energy balances the fidelity of the reconstruction against a constraint on the TV norm of each image and the sparsity of the images in the wavelet domain. This method improves on the plain inverse Fourier transform reconstruction.
**Reconstruction by TV-L1 method with $\lambda_1=$ and $\lambda_2=$**

Improvement
We can improve this reconstruction by using the temporal information. Indeed, MRI data come from a sequence, so there is a correlation between two consecutive reconstructed frames. This correlation is given by the optical flow. Assuming we have a method to reconstruct the optical flow $\mathbf{u}$ (such as the TV-L1 method), we propose to minimize the previous energy augmented with a transport term of the form $\left\|\frac{\partial \mathbf{m}}{\partial t} + \nabla \mathbf{m}\cdot\mathbf{u}\right\|_1$.
Algorithm
We now describe the method to minimize this energy, based on the Chambolle and Pock algorithm. The algorithm is a primal-dual iteration; let $\mathbf{y}$ be the collection of dual variables.
The Chambolle-Pock algorithm solves the corresponding saddle point problem with $C=\begin{pmatrix}K \ \nabla \ \Psi \ F\end{pmatrix}^T$ and $F\mathbf{m} = \frac{\partial \mathbf{m}}{\partial t} + \nabla \mathbf{m}\cdot\mathbf{u}$.
The iteration proceeds as follows.
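For reference, the generic Chambolle-Pock iteration for a saddle point problem $\min_{\mathbf m}\max_{\mathbf y}\ \langle C\mathbf m, \mathbf y\rangle + G(\mathbf m) - H^{*}(\mathbf y)$ reads as below (standard notation with step sizes $\tau, \sigma$; I write the conjugate as $H^{*}$ so as not to clash with the flow operator $F$):

```latex
\mathbf y^{n+1} = \operatorname{prox}_{\sigma H^{*}}\left(\mathbf y^{n} + \sigma C\,\bar{\mathbf m}^{n}\right), \\
\mathbf m^{n+1} = \operatorname{prox}_{\tau G}\left(\mathbf m^{n} - \tau C^{*}\mathbf y^{n+1}\right), \\
\bar{\mathbf m}^{n+1} = 2\,\mathbf m^{n+1} - \mathbf m^{n}.
```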
Our contribution
The main idea of the project was to combine MRI reconstruction with a sparse representation of the optical flow.
Tourism
During these two months, I enjoyed walking the old streets of Cambridge. It was impressive to be in the city where the structure of DNA was discovered.
Mathematical Bridge
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative?I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough . I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused about angular momentum in classical physics. For the orbital motion of a point mass: if we pick a new coordinate origin (that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; there is an additional term that varies with time.)
in the new coordinates: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the 1st term varies with time ($\vec R$ is the constant shift of the coordinate origin, and $\vec p$ is, roughly, rotating).
Would anyone be kind enough to shed some light on this for me?
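For what it's worth, the resolution is just bookkeeping on the extra term (my notation: $\vec F$ is the net force, $\vec\tau$ the torque about the original origin):

```latex
\frac{\mathrm d\vec{L'}}{\mathrm dt}
  = \frac{\mathrm d}{\mathrm dt}\left(\vec R \times \vec p\right) + \frac{\mathrm d\vec L}{\mathrm dt}
  = \vec R \times \vec F + \vec\tau,
% since dR/dt = 0 and dp/dt = F. About the shifted origin, the torque of a
% central force no longer vanishes, so L' need not be conserved: angular
% momentum is conserved only about points where the net torque is zero.
```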
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet
"Is it possible to make a time machine ever? Please give an easy answer, a simple one." A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud?
Hello guys! I was wondering if you knew some books/articles that have a good introduction to convexity in the context of variational calculus (functional analysis). I was reading Young's "calculus of variations and optimal control theory" but I'm not that far into the book and I don't know if skipping chapters is a good idea.
I don't know of a good reference, but I'm pretty sure that just means that second derivatives have consistent signs over the region of interest. (That is certainly a sufficient condition for Legendre transforms.)
@dm__ yes have studied bells thm at length ~2 decades now. it might seem airtight and has stood the test of time over ½ century, but yet there is some fineprint/ loopholes that even phd physicists/ experts/ specialists are not all aware of. those who fervently believe like Bohm that no new physics will ever supercede QM are likely to be disappointed/ dashed, now or later...
oops lol typo bohm bohr
btw what is not widely appreciated either is that nonlocality can be an emergent property of a fairly simple classical system, it seems almost nobody has expanded this at length/ pushed it to its deepest extent. hint: harmonic oscillators + wave medium + coupling etc
But I have seen that the convexity is associated to minimizers/maximizers of the functional, whereas the sign second variation is not a sufficient condition for that. That kind of makes me think that those concepts are not equivalent in the case of functionals...
@dm__ generally think sampling "bias" is not completely ruled out by existing experiments. some of this goes back to CHSH 1969. there is unquestioned reliance on this papers formulation by most subsequent experiments. am not saying its wrong, think only that theres very subtle loophole(s) in it that havent yet been widely discovered. there are many other refs to look into for someone extremely motivated/ ambitious (such individuals are rare). en.wikipedia.org/wiki/CHSH_inequality
@dm__ it stands as a math proof ("based on certain assumptions"), have no objections. but its a thm aimed at physical reality. the translation into experiment requires extraordinary finesse, and the complex analysis starts with CHSH 1969. etc
While it's not something usual, I've noticed that sometimes people edit my question or answer with a more complex notation or incorrect information/formulas. While I don't think this is done with malicious intent, it has sometimes confused people when I'm either asking or explaining something, as...
@vzn what do you make of the most recent (2015) experiments? "In 2015 the first three significant-loophole-free Bell-tests were published within three months by independent groups in Delft, Vienna and Boulder.
All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. This makes them “loophole-free” in the sense that all remaining conceivable loopholes like superdeterminism require truly exotic hypotheses that might never get closed experimentally."
@dm__ yes blogged on those. they are more airtight than previous experiments. but still seem based on CHSH. urge you to think deeply about CHSH in a way that physicists are not paying attention. ah, voila even wikipedia spells it out! amazing
> The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. > A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment.
↑ suspect entire general LHV theory of QM lurks in these loophole(s)! there has been very little attn focused in this area... :o
how about this for a radical idea? the hidden variables determine the probability of detection...! :o o_O
@vzn honest question, would there ever be an experiment that would fundamentally rule out nonlocality to you? and if so, what would that be? what would fundamentally show, in your opinion, that the universe is inherently local?
@dm__ my feeling is that something more can be milked out of bell experiments that has not been revealed so far. suppose that one could experimentally control the degree of violation, wouldnt that be extraordinary? and theoretically problematic? my feeling/ suspicion is that must be the case. it seems to relate to detector efficiency maybe. but anyway, do believe that nonlocality can be found in classical systems as an emergent property as stated...
if we go into detector efficiency, there is no end to that hole. and my beliefs have no weight. my suspicion is screaming absolutely not, as the classical is emergent from the quantum, not the other way around
@vzn have remained civil, but you are being quite immature and condescending. I'd urge you to put aside the human perspective and not insist that physical reality align with what you expect it to be. all the best
@dm__ ?!? no condescension intended...? am striving to be accurate with my words... you say your "beliefs have no weight," but your beliefs are essentially perfectly aligned with the establishment view...
Last night's dream introduced a strange reference-frame-based disease called Forced Motion Blindness. It is a strange eye disease where the lens is such that, to the patient, anything stationary wrt the floor appears to be moving forward in a certain direction, causing them to keep walking to catch up with it. At the same time, the normal person thinks they are stationary wrt the floor. The result of this discrepancy is that the patient keeps bumping into the normal person. In order not to bump, the person has to walk at the apparent velocity as seen by the patient. The only known way to cure it is to remo…
And to make things even more confusing:
Such a disease is never possible in real life, for it involves two incompatible realities coexisting and co-influencing in a pluralistic fashion. In particular, as seen by those not having the disease, the patient keeps running into the back of the normal person, but to the patient, he never runs into him and is walking normally
It seems my mind has gone f88888 up enough to envision two realities that with fundamentally incompatible observations, influencing each other in a consistent fashion
It seems my mind is getting more and more comfortable with dialetheia now
@vzn There's blatant nonlocality in Newtonian mechanics: gravity acts instantaneously. Eg, the force vector attracting the Earth to the Sun points to where the Sun is now, not where it was 500 seconds ago.
@Blue ASCII is a 7 bit encoding, so it can encode a maximum of 128 characters, but 32 of those codes are control codes, like line feed, carriage return, tab, etc. OTOH, there are various 8 bit encodings known as "extended ASCII", that have more characters. There are quite a few 8 bit encodings that are supersets of ASCII, so I'm wary of any encoding touted to be "the" extended ASCII.
If we have a system and we know all the degrees of freedom, we can find the Lagrangian of the dynamical system. What happens if we apply some non-conservative forces in the system? I mean how to deal with the Lagrangian, if we get any external non-conservative forces perturbs the system?Exampl...
@Blue I think now I probably know what you mean. Encoding is the way to store information in digital form; I think I have heard the professor talking about that in my undergraduate computer course, but I thought that is not very important in actually using a computer, so I didn't study that much. What I meant by use above is what you need to know to be able to use a computer, like you need to know LaTeX commands to type them.
@AvnishKabaj I have never had any of these symptoms after studying too much. When I have intensive studies, like preparing for an exam, after the exam, I feel a great wish to relax and don't want to study at all and just want to go somewhere to play crazily.
@bolbteppa the (quanta) article summary is nearly popsci writing by a nonexpert. specialists will understand the link to LHV theory re quoted section. havent read the scientific articles yet but think its likely they have further ref.
@PM2Ring yes so called "instantaneous action/ force at a distance" pondered as highly questionable bordering on suspicious by deep thinkers at the time. newtonian mechanics was/ is not entirely wrong. btw re gravity there are a lot of new ideas circulating wrt emergent theories that also seem to tie into GR + QM unification.
@Slereah No idea. I've never done Lagrangian mechanics for a living. When I've seen it used to describe nonconservative dynamics I have indeed generally thought that it looked pretty silly, but I can see how it could be useful. I don't know enough about the possible alternatives to tell whether there are "good" ways to do it. And I'm not sure there's a reasonable definition of "non-stupid way" out there.
← lol went to metaphysical fair sat, spent $20 for palm reading, enthusiastic response on my leadership + teaching + public speaking abilities, brought small tear to my eye... or maybe was just fighting infection o_O :P
How can I move a chat back to comments? In complying with the automated admonition to move comments to chat, I discovered that MathJax was no longer rendered. This is unacceptable in this particular discussion. I therefore need to undo my action and move the chat back to comments.
hmmm... actually the reduced mass comes out of using the transformation to the center of mass and relative coordinates, which have nothing to do with Lagrangian... but I'll try to find a Newtonian reference.
One example is a spring of initial length $r_0$ with two masses $m_1$ and $m_2$ on the ends such that $r = r_2 - r_1$ is its length at a given time $t$ - the force laws for the two ends are $m_1 \ddot{r}_1 = k (r - r_0)$ and $m_2 \ddot{r}_2 = - k (r - r_0)$ but since $r = r_2 - r_1$ it's more natural to subtract one from the other to get $\ddot{r} = - k (\frac{1}{m_1} + \frac{1}{m_2})(r - r_0)$ which makes it natural to define $\frac{1}{\mu} = \frac{1}{m_1} + \frac{1}{m_2}$ as a mass
since $\mu$ has the dimensions of mass and since then $\mu \ddot{r} = - k (r - r_0)$ is just like $F = ma$ for a single variable $r$ i.e. a spring with just one mass
@vzn It will be interesting if a de-scarring followed by a re scarring can be done in some way in a small region. Imagine being able to shift the wavefunction of a lab setup from one state to another thus undo the measurement, it could potentially give interesting results. Perhaps, more radically, the shifting between quantum universes may then become possible
You can still use Fermi to compute transition probabilities for the perturbation (if you can actually solve for the eigenstates of the interacting system, which I don't know if you can), but there's no simple human-readable interpretation of these states anymore
@Secret when you say that, it reminds me of the no cloning thm, which have always been somewhat dubious/ suspicious of. it seems like theyve already experimentally disproved the no cloning thm in some sense. |
I'm studying the holographic entanglement entropy (HEE) in this paper (Ryu-Takayanagi, 2006). In section 6.3 they compute the HEE for a segment in a 2D CFT. To do so, they obtain the corresponding geodesic in the bulk (in the Poincaré patch) and compute its length.
I understand all that process, but I'm having some trouble when they introduce the cutoff. The metric diverges when $z\to0$ so we introduce a cutoff $\epsilon>0$, I understand that. But then they say
Since $e^\rho\sim x^i/z$ near the boundary, we find $z\sim a$
Here, $\rho$ is the hyperbolic radial coordinate in the global coordinates for AdS,
$$ ds^2 = R^2(-\cosh^2\rho\ d\tau^2 + d\rho^2 + \sinh^2\rho\ d\Omega^2) $$
$x^i$ and $z$ are coordinates in the Poincaré patch,
$$ ds^2 = \frac{R^2}{z^2}(dz^2-dt^2+\sum_i(dx^i)^2) $$
And $a$ is the inverse of the UV cutoff of the CFT in the boundary, that is, the spacing between sites.
I have two problems:
1) First, I don't see why near the boundary $e^\rho\sim x^i/z$. I worked out the relations between the two coordinate systems and I find more complicated relations than that (even setting $z\sim0$).
2) Even assuming the previous point, I don't understand why we obtain that relation between the CFT and the $z$ cutoff. |
Definition:Factorial

Definition
Let $n \in \Z_{\ge 0}$ be a non-negative integer.
The factorial of $n$ is defined inductively as:
$n! = \begin{cases} 1 & : n = 0 \\ n \left({n - 1}\right)! & : n > 0 \end{cases}$
The factorial of $n$ is defined as:
$\displaystyle n! = \prod_{k \mathop = 1}^n k = 1 \times 2 \times \cdots \times \paren {n - 1} \times n$
where $\displaystyle \prod$ denotes product notation.
$\begin{array}{r|r} n & n! \\ \hline 0 & 1 \\ 1 & 1 \\ 2 & 2 \\ 3 & 6 \\ 4 & 24 \\ 5 & 120 \\ 6 & 720 \\ 7 & 5 \, 040 \\ 8 & 40 \, 320 \\ 9 & 362 \, 880 \\ 10 & 3 \, 628 \, 800 \\ \end{array}$
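As an illustrative sketch (not part of the definition), the inductive definition above translates directly into code; nothing is assumed beyond the recurrence $0! = 1$, $n! = n \left({n - 1}\right)!$:

```python
def factorial(n: int) -> int:
    """Factorial defined inductively: 0! = 1 and n! = n * (n-1)!."""
    if n < 0:
        raise ValueError("factorial is defined for non-negative integers only")
    return 1 if n == 0 else n * factorial(n - 1)

# Reproduce the table above:
for n in range(11):
    print(n, factorial(n))
```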
For a multiindex $\alpha = \paren {\alpha_j}_{j \mathop \in J}$ of natural numbers, we define: $\displaystyle\alpha! = \prod_{j \mathop \in J} \alpha_j!$
where the factorial on the right is a factorial of natural numbers.
Also known as
While the canonical vocalisation of, for example, $4!$ is $4$ factorial, it can often be found referred to as $4$ bang or (usually by schoolchildren) $4$ shriek. Some mathematicians prefer $4$ gosh.
Also see
Results about factorials can be found here.
Before that, various symbols were used whose existence is now of less importance.
Notations for $n!$ in history include the following:
$\sqbrk n$ as used by Euler
$\mathop{\Pi} n$ as used by Gauss
$\left\lvert {\kern-1pt \underline n} \right.$ and $\left. {\underline n \kern-1pt} \right\rvert$, once popular in England and Italy.
One nineteenth-century writer objected to the notation:
Amongst the worst barbarisms is that of introducing symbols which are quite new in mathematical, but perfectly understood in common, language. Writers have borrowed from the Germans the abbreviation $n!$ ... which gives their pages the appearance of expressing admiration that $2$, $3$, $4$, etc., should be found in mathematical results.
Looking at the MO reputation league I found that it shows 1767 screens; all screens but the last one show 9x4=36 users and the last screen shows 27 users. So in total MO has 1766x36+27=63 603 users.
On the other hand, in the profile of the MO leader Hamkins we can find that he is 0,02% overall. But if MO indeed has 63603 users, he should be $\frac1{63 603}\cdot 100\%\approx 0{,}0016\%$ overall.
The same problem with other users:
for David Speyer is written 0,04% but should be $\frac2{63 603}\cdot 100\%\approx 0,003\%$
for Joseph O'Rourke is written 0,07% but should be 0,005%
for Qiaochu Yuan is written 0,09% but should be 0,006%,
and so on...
What is the reason for such a more-than-tenfold difference? |
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet and neither a reason why the expression cannot be prime for odd n, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That's what I'm thinking about, I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it, it is really "too elementary", but I like surprises, if they're good.
It is in fact difficult; I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor $p$ such that $p-1$ is smooth (has only small prime factors)
Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $$n!+1=m^2,$$ where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: Pairs of the numbers $(n, m)$ that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...
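The claim above is easy to check by brute force; the following is a short sketch (the `brown_numbers` helper is my own, not from the sources quoted) that recovers the three known Brown-number pairs:

```python
import math

def brown_numbers(limit: int):
    """Brute-force search for pairs (n, m) with n! + 1 = m^2 (Brocard's problem)."""
    pairs = []
    f = 1
    for n in range(1, limit + 1):
        f *= n                      # f = n!
        m = math.isqrt(f + 1)       # integer square root of n! + 1
        if m * m == f + 1:
            pairs.append((n, m))
    return pairs

print(brown_numbers(100))   # the only known pairs: (4, 5), (5, 11), (7, 71)
```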
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that: For distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$ It is of anticipation that there will be much fewer solutions for incr... |
Communication, Networking, Signal and Image Processing (CS)
Question 5: Image Processing
August 2016 (Published in Jul 2019)
Problem 1: Calculate an expression for $ \lambda_n^c $, the X-ray energy corrected for the dark current
$ \lambda_n^c=\lambda_n^b-\lambda_n^d $
Calculate an expression for $ G_n $, the X-ray attenuation due to the object's presence
$ G_n = \frac{d\lambda_n^c}{dx} = -\mu(x, y_0 + n\Delta d)\,\lambda_n^c $
Calculate an expression for $ \hat{P}_n $, an estimate of the integral intensity in terms of $ \lambda_n $, $ \lambda_n^b $, and $ \lambda_n^d $
$ \lambda_n = (\lambda_n^b-\lambda_n^d)\, e^{-\int_{0}^{x}\mu(t)dt} $
$ \hat{P}_n = \int_{0}^{x}\mu(t)dt= -log(\frac{\lambda_n}{\lambda_n^b-\lambda_n^d}) $
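As a numerical sketch of the two formulas above (the detector readings here are made-up values for illustration, not from the exam):

```python
import numpy as np

# Hypothetical detector readings (arbitrary units): blank scan, dark current,
# and the measurement with the object present
lam_b = np.array([1000.0, 1000.0, 1000.0])   # lambda_n^b
lam_d = np.array([50.0, 50.0, 50.0])         # lambda_n^d
lam   = np.array([400.0, 150.0, 60.0])       # lambda_n

lam_c = lam_b - lam_d                         # dark-current-corrected blank scan
P_hat = -np.log(lam / lam_c)                  # estimated integral attenuation

# For constant density mu(x, y) = mu_0, P_hat = mu_0 * T_n:
# a straight line through the origin with slope mu_0 when plotted against thickness.
print(P_hat)
```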
For this part, assume that the object is of constant density with $ \mu(x,y) = \mu_0 $. Then sketch a plot of $ \hat{P}_n $ versus the object thickness, $ T_n $, in mm, for the $ n^{th} $ detector. Label key features of the curve such as its slope and intercept.
Problem 2: Specify the size of $ YY^t $ and $ Y^tY $. Which matrix is smaller?
Y is of size $ p \times N $, so the size of $ YY^t $ is $ p \times p $
Y is of size $ p \times N $, so the size of $ Y^tY $ is $ N \times N $
Obviously, the size of $ Y^tY $ is much smaller, since N << p.
Prove that $ YY^t $ and $ Y^tY $ are both symmetric and positive semi-definite matrices.
To prove it is symmetric:
$ (YY^t)^t = (Y^t)^t Y^t = YY^t $
To prove it is positive semi-definite:
Let x be an arbitrary vector
$ x^tYY^tx = (Y^tx)^t(Y^tx) \geq 0 $, so $ YY^t $ is positive semi-definite.
The same argument applies to $ Y^tY $.
Derive expressions for $ V $ and $ \Sigma $ in terms of $ T $, and $ D $.
$ Y^tY = (U \Sigma V^t)^T U \Sigma V^t = V\Sigma^2 V^t = TDT^t $ therefore $ V =T $ and $ \Sigma = D^{\frac{1}{2}} $
Derive an expression for $ U $ in terms of $ Y $, $ T $, $ D $.
$ Y = U\Sigma V^t= UD^{\frac{1}{2}}T^t $
$ \therefore U = Y(D^{\frac{1}{2}}T^t)^{-1} $
Derive expressions for E in terms of Y, T, and D.
$ YY^t = U\Sigma V^t(U\Sigma V^t)^t= U\Sigma^2 U^t = E\Gamma E^t $
therefore
$ E = U = Y(D^{\frac{1}{2}}T^t)^{-1} $
If the columns of Y are images from a training database, then what name do we give to the columns of U?
They are called eigenimages |
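The whole Problem 2 pipeline can be sketched numerically; this is a rough illustration with a random synthetic $Y$ (all names here are my own), computing the eigenimages $U = YTD^{-1/2}$ from the small $N \times N$ matrix rather than the large $p \times p$ one:

```python
import numpy as np

rng = np.random.default_rng(0)
p, N = 500, 8                      # p pixels per image, N training images (N << p)
Y = rng.standard_normal((p, N))    # columns stand in for training images

# Eigendecompose the small N x N matrix: Y^t Y = T D T^t
D, T = np.linalg.eigh(Y.T @ Y)     # eigh returns ascending eigenvalues
D, T = D[::-1], T[:, ::-1]         # reorder to descending

Sigma = np.sqrt(D)                 # singular values: Sigma = D^(1/2)
U = (Y @ T) / Sigma                # eigenimages: U = Y T D^(-1/2) = Y (D^(1/2) T^t)^(-1)

# Columns of U are orthonormal and diagonalize Y Y^t
print(np.allclose(U.T @ U, np.eye(N)))
```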
Given a modulus $n\in\mathbb{N}$ and another natural number $0<x<n$, what's an efficient algorithm to enumerate all pairs of natural numbers $(a,b)\in\mathbb{N}^2$ such that both $a$ and $b$ are less than $n$ and $$ab\equiv x \mod n\text?$$
Let's solve the problem for a specific $a$. Observe that $ab \equiv x \pmod{n}$ if and only if there exists a $m \in \mathbb{Z}$ such that
$\qquad ab + mn = x$
By Bézout's identity, the above has solutions if and only if $(a, n) \mid x$ (the "if" arrow is the theorem, the "only if" arrow is trivial). One of those solutions can be found by the extended Euclidean algorithm, and all the others can be obtained by observing that if $(b, m)$ is a solution, then
$\qquad \left( b + \frac{n}{(a, n)}, m + \frac{a}{(a, n)} \right) $
is also a solution. Repeating the above process for all $a \in \mathbb{Z}/n\mathbb{Z}$ yields all the pairs you are looking for. While it's rather naive, I don't think you can get much better than that if you're looking for all solutions. |
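A rough Python sketch of the procedure described above (function names are mine; this is the naive per-$a$ enumeration, not an optimized method):

```python
from math import gcd

def ext_gcd(a, b):
    """Extended Euclid: return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def pairs_with_product(x, n):
    """All (a, b) in [0, n)^2 with a*b ≡ x (mod n)."""
    result = []
    for a in range(n):
        g = gcd(a, n)
        if g == 0 or x % g != 0:
            continue               # no solution for this a unless gcd(a, n) | x
        _, s, _ = ext_gcd(a, n)    # s*a + t*n = g
        b0 = (s * (x // g)) % n    # one solution of a*b ≡ x (mod n)
        step = n // g
        for k in range(g):         # the g distinct solutions mod n
            result.append((a, (b0 + k * step) % n))
    return result

print(pairs_with_product(6, 12))
```

Comparing against a double loop over all $(a, b)$ confirms the enumeration is complete.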
You have already got "practical" answers, so I intend to answer form another point of view.
There is a quite famous theorem due to Stone and von Neumann, later improved by Mackey, and finally by Dixmier and Nelson, roughly speaking establishing the following result in its most elementary version. (Another version of the theorem focuses on the unitary groups generated by $X$ and $P$, avoiding problems with domains; however, I stick here to the self-adjoint operator version.)
THEOREM. (rough statement "for physicists")
If you have a couple of self-adjoint operators $X$ and $P$ defined on a Hilbert space $H$ that are conjugate to each other:
\begin{equation}[X,P] = i \hbar I \quad\quad\quad (1)\end{equation}
and there is a cyclic vector for $X$ and $P$, then there exists a unitary operator $U : L^2(R, dx)\to H$ such that:
$$(U^{-1} X U )\psi (x)= x\psi(x)\quad \mbox{and}\quad (U^{-1} P U )\psi (x)= -i\hbar \frac{d\psi(x)}{dx}\:.\quad (2)$$
(The rigorous statement, in this Nelson-like version, reads as follows.
THEOREM.
Let $X$ and $P$ be a pair of self-adjoint operators on a complex Hilbert space $H$ such that (a) they verify (1) on a common invariant dense subspace $S\subset H$, (b) $X^2+P^2$ is essentially self-adjoint on $S$, and (c) there is a cyclic vector in $S$ for $X$ and $P$. Then there exists a unitary operator $U : L^2(R, dx)\to H$ such that (2) holds for $\psi \in C_0^{\infty}(R)$.
Notice that the operators defined in the right-hand sides of (2) admit unique self-adjoint extensions, so they completely fix the operators representing the respective observables. We can equally replace $C_0^\infty(R)$ with the Schwartz space ${\cal S}(R)$ in the last statement.)
Barring technicalities, all that means that
commutation relations actually fix position and momentum observables as well as the Hilbert space. For instance, referring to Murod Abdukhakimov's answer, if the addition of $\partial f$ to the standard expressions of $X$ and $P$ gives rise to truly self-adjoint operators, then a unitary transformation (just that connecting $\psi$ to $\psi'$ in Murod Abdukhakimov's answer) gets rid of the deformation, restoring the standard expression. Remember that unitary transformations do not alter the physical content of QM.
The result extends to $R^n$, i.e., concerning particles in space for $n=3$. Dropping the irreducibility requirement (that there is a cyclic vector for $X$ and $P$), the thesis holds anyway, but $H$ decomposes into a direct sum (not a direct integral!) of closed subspaces where the strong statement is valid.
There are important consequences of this fundamental theorem. First of all, $H$ must be separable, as $L^2(R,dx)$ is. Moreover, no time operator $T$ (conjugated with the Hamiltonian operator $H$) exists if the Hamiltonian operator is bounded below, as physics requires. The latter statement is due to the fact that the theorem fixes the spectra of $X$ and $P$ as the whole real axis in both cases, so that the spectrum of $H$ would not be bounded below if $T,H$ were a conjugated pair of operators. A similar no-go theorem arises concerning quantization of a particle on a circle when one tries to define position and impulse self-adjoint operators. The attempt to overcome these no-go results gave rise to more general formulations of quantum mechanics based on the notion of POVM, which eventually turned out to be very useful in other contexts such as quantum information theory.
An important observation is that the Stone–von Neumann–Mackey–Dixmier–Nelson result fails when dealing with infinite-dimensional systems. That is, roughly speaking, passing from the (symplectic space of a) finite number of particles to the (symplectic space of a) field. In that case the canonical commutation relations of $X_i$ and $P_j$ are replaced by those of the quantum fields. E.g.:
$$[\phi(t, x), \pi(t, y)] = i \hbar \delta(x,y) I$$
or more sophisticated versions of them. At this juncture, there exist infinitely many representations of the algebra of observables that cannot be connected by unitary operators. This is a well-known phenomenon in QFT and quantum statistical mechanics (in the thermodynamic limit). For instance, the free theory and the interacting theory of a given quantum field cannot be represented in the same Hilbert space once one assumes standard requirements on states and observables (the so-called Haag's theorem; this is the deep reason why the LSZ formalism uses the weak topology instead of the strong one, as in the standard quantum theory of scattering).
If one includes superselections charges in the algebra of observables, non unitarily equivalent representations of the algebra arise automatically giving rise to sectors.
In QFT in curved spacetime the appearance of inequivalent representations of the algebra of observables is a quite common phenomenon due to the presence of curvature of the spacetime.
This post imported from StackExchange Physics at 2014-04-12 19:04 (UCT), posted by SE-user V. Moretti |
I'm not quite sure that I understand how the generalized likelihood ratio test works for composite hypotheses; observe the example below:
Let $X_1,...,X_n$ be a random sample from an exponential distribution, $X_i\sim EXP(\theta) \implies E(X_i)=\theta$. Derive the generalized likelihood ratio test of $H_0:\theta=\theta_0$ vs. $H_a: \theta>\theta_0$.
I've been able to do a good portion of the work; we know that, in this case, $\bar{X}$ is the maximum likelihood estimator of $\theta$. But here's where I'm confused. Suppose instead we had that $H_0: \theta=\theta_0$ vs. $H_a:\theta\ne\theta_0$.
If this were true instead, we would have that the likelihood ratio is given by: $$\lambda(\vec{X})=\frac{\bar{x}^n e^{-n\bar{x}/\theta_0+n}}{\theta_0^n}$$And then we would reject the null hypothesis if this value was less than some constant $c$. However, considering that a composite hypothesis is given instead, I believe that the decision rule needs to change somehow; the problem is that I don't understand how. The textbook lists the following as the final answer to the original question:
Reject $H_0$ if $2n\bar{x}/\theta_0 \ge \chi^2_{1-\alpha}(2n)$
where $\chi^2_{1-\alpha}(2n)$ represents the percentile function of the chi square distribution with $2n$ degrees of freedom. Can somebody show how to work to this solution; I don't know how they get there because this is a composite hypothesis and I don't understand what needs to be changed. |
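The textbook's rule can at least be sanity-checked numerically: under $H_0$, $\sum_i X_i \sim \mathrm{Gamma}(n, \theta_0)$, so $2n\bar X/\theta_0 \sim \chi^2(2n)$, and the test should reject with probability $\alpha$. A simulation sketch (the quantile uses the Wilson–Hilferty approximation to avoid external dependencies; all choices of $n$, $\theta_0$ here are my own):

```python
import math
import random

random.seed(0)
n, theta0 = 20, 2.0

# Wilson–Hilferty approximation to the 0.95 quantile of chi-square with k dof
z = 1.6449                                   # standard normal 0.95 quantile
k = 2 * n
crit = k * (1 - 2 / (9 * k) + z * math.sqrt(2 / (9 * k))) ** 3

# Simulate samples under H0 and estimate the rejection rate of the test
# "reject H0 if 2*n*xbar/theta0 >= chi2_{0.95}(2n)"
trials, rejections = 20000, 0
for _ in range(trials):
    xbar = sum(random.expovariate(1 / theta0) for _ in range(n)) / n
    if 2 * n * xbar / theta0 >= crit:
        rejections += 1

print(rejections / trials)   # should be close to alpha = 0.05
```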
Another application of vol surfaces right here. A poor fit strongly indicates the likelihood of bad data - particularly when the number of strikes is high, as is the case for SPX.
Since you do not need a brilliant vol surface for pricing exotics, you needn't worry about smoothness at all points or arbitrage free considerations. A quick thing is to fit your vols to a piecewise continuous parabolic functional form. Something like this:
$$\sigma(k)=\sigma_\mathrm{atm}+\beta k+\alpha k^2$$
where $k=\mathrm{log}(K/F)$. I would fit the ATM by doing a similar parabolic fit on the strikes near the money and your ATM vol will be the intercept. Then you can fit the left side of the vol smile with the parabolic form above constrained so that the intercept is $\sigma_\mathrm{atm}$. In other words, fit
$$\sigma(k)-\sigma_\mathrm{atm}=\beta_L k+\alpha_L k^2$$
Then do the same thing on the right hand side:
$$\sigma(k)-\sigma_\mathrm{atm}=\beta_R k+\alpha_R k^2$$
This gives you 5 parameters for fitting a vol smile while only doing 3 regressions - very quick operations. The five parameters are $\sigma_\mathrm{atm}, \alpha_L,\beta_L,\alpha_R,\beta_R$.
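A minimal sketch of the three-regression fit described above, on synthetic data (the width of the near-the-money window and all parameter values are my own choices, not prescribed by the text):

```python
import numpy as np

def fit_smile(k, vol):
    """Piecewise parabolic smile fit: ATM intercept from a near-the-money
    parabola, then left/right wings constrained to that ATM vol.
    Returns (atm, beta_L, alpha_L, beta_R, alpha_R)."""
    # 1) ATM vol = intercept of a parabola fitted to near-the-money strikes
    near = np.abs(k) < 0.05
    _, _, atm = np.polyfit(k[near], vol[near], 2)

    # 2) wings: fit vol - atm = beta*k + alpha*k^2 with no intercept
    def wing(mask):
        A = np.column_stack([k[mask], k[mask] ** 2])
        beta, alpha = np.linalg.lstsq(A, vol[mask] - atm, rcond=None)[0]
        return beta, alpha

    bL, aL = wing(k <= 0)
    bR, aR = wing(k >= 0)
    return atm, bL, aL, bR, aR

# Synthetic smile with known parameters: recover them from the fit
k = np.linspace(-0.4, 0.4, 81)
true_vol = np.where(k < 0, 0.2 - 0.5 * k + 1.2 * k**2,
                           0.2 - 0.5 * k + 0.8 * k**2)
atm, bL, aL, bR, aR = fit_smile(k, true_vol)
print(round(atm, 4), round(aL, 2), round(aR, 2))
```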
These 5 parameters can actually fit the data impressively well when the data is good and the bid/ask is not ridiculously wide (outside of the W-shaped vol "smiles" user @LocalVolatility schooled me on). Although this is not a linear regression, I still like to use the $R^2$ formula to compare my fitted vols to the input vols. For a ticker like SPX, you will typically see an $R^2$ of 99.8 - the fit is extremely good. I also like to look at what I call the $L^1$, $L^2$, and $L^\infty$ errors for my fits: the $L^1$ error is the sum of the absolute errors in vol space, the $L^2$ error is the square root of the sum of squared errors in the fit, and the $L^\infty$ error is the maximum error in the fit. On top of that, I count the number of times the fit misses the bid/ask, the vega-weighted number of misses (where vega is based on the vol from the fitted smile), and the $L^1$ distance from the bid/ask spread - i.e. the sum of absolute differences between the fitted vols and the bid/ask spread, where the distance is zero inside the spread. I use a combination of all of these measures to assess my data quality, along with some other tricks.
You also have the advantage of looking at tick data - so you can see a time series of these error metrics. A sudden drastic change could be an alert requiring intervention - but at this point handling the details are up to you.
Some additional work may be required for American options since put/call vols are not required to be the same... and if using single stocks, borrow costs can be annoying... but I don't have time to discuss those details right now.
PS - be sure to discard strikes that are less than 1 or 2 delta from such a fit - maybe even less than 5 delta - depending on what you need. For SPX going all the way to 1 delta options is fine - but going past that, you see a lot of noise on the wings. I do a lot of work cleaning up the wings that I have not discussed here - I generally try to tighten up the bid ask when I get to the wings so that the put/call prices are monotonic - it is a pain in the butt, but keeps things much cleaner in the long run. |
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca...
I am a bit confused about angular momentum in classical physics. For orbital motion of a point mass: if we pick a new coordinate origin (one that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved, with an additional term that varies with time.)
In the new coordinates: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the first term varies with time ($\vec{R}$ is the constant shift of the coordinate origin, and $\vec{p}$ is, roughly speaking, rotating).
Would anyone be kind enough to shed some light on this for me?
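For what it's worth, here is a quick numerical sketch (my own illustration, not from the original discussion) that makes the issue concrete: for a central force directed toward the original origin, the angular momentum about that origin is conserved, but $\vec L' = \vec R\times\vec p + \vec L$ about a shifted point is not, because the torque $\vec R\times\vec F$ about the shifted point is nonzero.

```python
# Sketch: a unit-radius circular orbit about the origin under an attractive
# inverse-square force (GM = 1 assumed).  Angular momentum (z-component) is
# tracked about the origin and about a shifted reference point (0.5, 0).
def simulate(shift_x, steps=20000, dt=1e-4):
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0   # circular orbit initial conditions
    L_values = []
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -x / r3, -y / r3        # central force toward the ORIGINAL origin
        vx += ax * dt                    # semi-implicit (symplectic) Euler step
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        # z-component of angular momentum about the point (shift_x, 0)
        L_values.append((x - shift_x) * vy - y * vx)
    return max(L_values) - min(L_values)  # spread: 0 means conserved

spread_origin = simulate(0.0)    # about the force centre: essentially zero
spread_shifted = simulate(0.5)   # about a shifted point: order one
print(spread_origin, spread_shifted)
```

The spread about the shifted point is large precisely because of the extra $\vec R\times\vec p$ term, which oscillates as $\vec p$ rotates.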
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet
Is it possible to ever make a time machine? Please give an easy answer, a simple one. — A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we have neither the technology to build one nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10, 47 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape, but this leads me to another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude, or is it a superposition of fields, each coming from some point in the cloud? |
David E Speyer
Professor of Mathematics at the University of Michigan. My research interests are in combinatorial algebraic geometry, particularly Schubert calculus, matroids and cluster algebras. I also enjoy thinking about number theory and computational mathematics. Disclaimer: I will often ignore messages about old Stack Exchange posts when I am working on other things.
Ann Arbor, MI
Meaning of the Model Relationships
It is often useful to divide all energy into mechanical and internal pieces. Change in total energy of a system is equal to the energy added to the system. Energy added can be separated into heat, Q, and work, W. When changes in energy are restricted to only the internal energies, conservation of energy reduces to the statement: the change in internal energy is equal to the energy added as heat and work. This statement is referred to as the First Law of Thermodynamics.
Allowing for a transfer of energy into a physical system from outside the system as either heat or work or both, we can express conservation of energy as:
\[ \Delta E = Q + W \]
which actually means
\[ \Delta E_{\text{physical system}} = Q_{in} + W_{in} \]
This equation is always true as long as we interpret ∆E to include the changes of energy associated with all physical processes, and we mean by Q and W the addition of energy to the system as heat or as work when Q and W are positive. (A negative Q or W means that energy is removed from the system.)
\[ \Delta E = \Delta E_{\text{mechanical}} + \Delta U = Q + W \]
where \( \Delta U = \Delta E_{bond} + \Delta E_{thermal} + \Delta E_{atomic} + \Delta E_{nuclear} \) and is called the change in internal energy. \(E_{atomic}\) includes changes in the atomic states of atoms that are not involved in chemical bonds. In normal chemical reactions and physical phase changes, there are no changes in nuclear energies, so the last term is zero.
First Law of Thermodynamics
If there are no changes in \(E_{\text{mechanical}}\), then the expression representing conservation of energy for a particular physical system becomes the familiar relation used by physicists and chemists known as the first law of thermodynamics:
\[ \Delta U = Q + W \]
In chemistry, the first law of thermodynamics at constant pressure is typically written as:
\[ \Delta U_{rxn} = q - P \Delta V \]
The subscript “rxn” signifies that this is a change in energy due to a reaction, and the lower case “q” signifies that heat is not a state function, but rather a transfer of energy. The work is expressed explicitly as -P∆V, since only this kind of work is assumed to occur. Rest assured, however, that we are talking about exactly the same thing. You should practice “seeing through or seeing past” the particular symbols to the physical content represented by an equation. These last two equations for ∆U are perfect examples of this point. Do you look at them and see two different strings of symbols to memorize, or do you immediately recognize them as expressing the same physical content?
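As a numeric illustration of the equivalence (values assumed purely for this sketch, not from the text):

```python
# A gas absorbs q = 500 J of heat while expanding by dV = 1.0e-3 m^3 against
# a constant external pressure P = 1.0e5 Pa.  Both sign conventions give the
# same change in internal energy.
q = 500.0        # J, heat into the system
P = 1.0e5        # Pa
dV = 1.0e-3      # m^3 (expansion, so work done ON the system is negative)

W_on = -P * dV               # work done on the gas
dU_physics = q + W_on        # Delta U = Q + W   (physics form, W = work on system)
dU_chem = q - P * dV         # Delta U = q - P dV (chemistry form)
print(dU_physics, dU_chem)   # identical: 400.0 400.0
```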
It is sometimes useful to write the first law of thermodynamics in differential form:
\[ dU = dQ + dW \]
(dQ is interpreted here to be a small transfer of energy, rather than a true differential)
4) The entropy, S, of a closed system can never decrease in any process (the closed system could be a combined open system and the surroundings with which it interacts). If the process is reversible, the entropy of the closed system remains constant. If the process is irreversible, the entropy of the closed system increases. These relationships are referred to as the Second Law of Thermodynamics.
Definition of Temperature
In thermodynamics, the real definition of temperature is different from what one might expect. Rather than an operational definition (one based on experiment), such as defining temperature by the readings of thermometers, the definition comes from a discussion of entropy and heat. \[ T \equiv \frac{dU}{dS} \]
It is more commonly written as \[ \frac{1}{T} \equiv \frac{dS}{dU} \] because dS/dU is the rate at which entropy changes when energy is added, whereas dU/dS implies a strange causal relationship (that changing the entropy changes the thermal energy, when the thing we can directly affect is heat, not entropy).
You might learn about this definition in greater detail in a later course. For now, think of it as a useful relationship that is worth remembering.
5) Temperature is defined thermodynamically by \( \frac{1}{T} \equiv \frac{dS}{dU} \), as discussed above.
6) For processes in which all work is reversible, the heat added to or removed from the system can be expressed in terms of the change in entropy using the previous relationship (dQ = T dS).
The restriction to reversible processes applies only to calculating ∆S (using ∆S = Q/T or dS = dQ/T) as a system moves from one state to another. The actual change ∆S of the system is the same regardless of whether the change is reversible or not. That is the meaning of S being a state function. But to calculate ∆S, we must move along a hypothetical path reversibly. This is the beauty or “magic” of thermodynamics: we can calculate values of changes in state functions for very complicated processes by imagining that we move the system between the two states in a manner that makes it easy to do the calculations. But the answers we get for the changes in the state functions using this approach are the same even when the system doesn’t actually change in this way!
Closer Look at Entropy from a Thermodynamic Viewpoint
The fundamental relationship between entropy and energy is the following. Imagine adding heat to a system reversibly, i.e., no friction occurs and no energy is transferred as heat across a finite temperature difference. Under these conditions the entropy changes according to the following relationship: \[ dS = \frac{dQ}{T} \]
Now S is a state variable, so \( \Delta S = S_{f} - S_{i} \)
For constant T, \( T \Delta S = Q \)
or \( \Delta S = \frac{Q}{T} \) (reversible process, constant T)
If Q is positive (heat is added to the system) ∆S is positive. On the other hand, if heat is removed, the entropy is reduced. We also see from the above relationship that the SI units of entropy are joules per kelvin.
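A short worked example of ∆S = Q/T at constant temperature (the latent heat value is an assumed textbook figure, not taken from this text):

```python
# Entropy change when 0.10 kg of ice melts reversibly at 0 degrees C.
# L_f ~ 3.34e5 J/kg is an assumed latent heat of fusion for water ice.
m = 0.10          # kg
L_f = 3.34e5      # J/kg, assumed
T = 273.15        # K, constant during the phase change

Q = m * L_f                 # heat added during melting
delta_S = Q / T             # Delta S = Q / T at constant temperature
print(round(delta_S, 1), "J/K")
```

Q is positive (heat flows in), so ∆S comes out positive, as the text says.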
Calculating ∆S
Since ∆S is a state variable and depends only on the initial and final state, we should be able to directly express it as a function of other state variables. We can use the first law of thermodynamics to do this.
\[ dQ = dU - dW = dU + PdV \]
Substituting this into our expression for entropy, dS = dQ / T, we get:
\[ dS = \frac{dU}{T} + P \frac{dV}{T} \]
If the temperature is constant, then we can also write
\[ \Delta S = \frac {\Delta U}{T} + P \frac{\Delta V}{T} \]
Experimentally, the most direct route to the determination of values of ∆S is through heat capacity data. In general, C = dQ/dT, so
\[ dS = \frac{dQ}{T} = C \frac{dT}{T} \]
and \[ \Delta S = \int_{T_{i}}^{T_{f}} \frac{C(T)}{T} dT \]
Either Cp or Cv can be used in the above relation. If Cv is used, the two state variables describing the system are T and V (V constant) and if Cp is used, the two state variables describing the system are T and P (P constant).
If C(T) is approximately constant over the temperature range in question, it can be pulled out of the integral and we have the useful result
\[ \Delta S = C \ln\!\left( \frac{T_{f}}{T_{i}} \right) \]
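A quick numeric check of this constant-C result (the heat capacity of water is an assumed textbook value, used only for illustration):

```python
import math

# Entropy change of 1.0 kg of liquid water heated from 20 C to 80 C at
# constant pressure, assuming c_p is constant over this temperature range.
m = 1.0          # kg
c_p = 4186.0     # J/(kg K), assumed constant specific heat of water
T_i = 293.15     # K
T_f = 353.15     # K

delta_S = m * c_p * math.log(T_f / T_i)   # Delta S = C ln(T_f / T_i)
print(round(delta_S, 1), "J/K")
```

Note that because C sits outside the integral, only the ratio T_f/T_i matters, not the individual temperatures.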
7) New energy state functions can be formed from combinations of other state functions. Two common examples are enthalpy, H, and Gibbs energy, G. (H = U + PV; G = H – TS)
8) At constant pressure, the energy added to a system can be expressed as a change in enthalpy (Q = ∆H).
This is a very useful relationship! This is because so much experimental work is done in open containers at one atmosphere of pressure. This is worth looking at more closely to see how it comes about.
Enthalpy
The enthalpy of a physical system is a function of state variables. It is defined as
\[ H = U + PV \]
Note that the three variables that define H are all themselves state variables.
We note for future reference that P is an energy density, which, when multiplied by volume, is an energy. Why would a function equal to the internal energy plus the product of the two state variables pressure and volume be useful? As with energy, it is the changes in H that occur during a process that are important. Changes in enthalpy are most useful under certain conditions: specifically, situations in which pressure is held constant. Chemists and biologists conduct many of their experiments under constant pressure (any reaction carried out in a container open to the atmosphere).
The following derivation shows why enthalpy is a convenient variable when considering changes occurring under constant pressure:
\[ \Delta H = \Delta U + P \Delta V \]
or \[ dH = dU + P dV \]
For ∆U we can substitute the expression for internal energy we obtained from the first law of thermodynamics (at constant pressure): ∆U = Q - P∆V. This gives us
\[ \Delta H = Q - P \Delta V + P \Delta V \]
and
∆H = Q or dH = dQ (constant pressure process and only PV work)
So at constant pressure, the enthalpy change during a reaction is simply equal to the heat entering the system. Likewise, if a phase change is allowed to occur at constant pressure, the heats of melting and vaporization are simply equal to the changes in enthalpy. So now we see why heats of fusion and vaporization are often listed as changes in enthalpies. When you measure the energies transferred as heat during a phase change at constant pressure, you are directly measuring the change in the state function H. A negative ∆H means heat is transferred out, and the reaction is exothermic. A positive ∆H means heat is transferred in, and the reaction is endothermic.
∆H < 0 exothermic
∆H > 0 endothermic
It is because dH = dQ (constant pressure and only PV work) for the conditions so common in chemistry and biochemistry, and technology in general, that enthalpy is so useful. Most measurements are carried out at constant pressure, and for many systems, the only work involved comes from compressing or expanding a gas.
9) At constant pressure and temperature, the change in the Gibbs energy expresses a system’s ability to satisfy both the conservation of energy and the tendency of entropy to increase, i.e., satisfaction of both the 1st and 2nd laws of thermodynamics: ∆G = ∆H – T ∆S. ∆G represents the amount of energy available to do useful work in a reaction or process.
10) The direction in which a reaction or process proceeds is determined by the sign of ∆G for the process:
If ∆G < 0, the reaction is spontaneous
If ∆G > 0, the reaction is not spontaneous
Note: A full discussion of Gibbs Energy is included in the discussion of the Intro Statistical Model of Thermodynamics at the end of this chapter.
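The sign rule in 10) can be illustrated with the melting of ice (the per-mole ∆H and ∆S values below are assumed round numbers, not from this text):

```python
# Ice -> water at 1 atm.  Using assumed values dH ~ +6010 J/mol and
# dS ~ +22.0 J/(mol K), dG = dH - T dS changes sign near 273 K, matching
# the fact that melting is spontaneous only above 0 degrees C.
dH = 6010.0     # J/mol, assumed enthalpy of fusion
dS = 22.0       # J/(mol K), assumed entropy of fusion

for T in (263.15, 273.15, 283.15):
    dG = dH - T * dS
    print(T, round(dG, 1), "spontaneous" if dG < 0 else "not spontaneous")
```

Below the melting point ∆G > 0 (not spontaneous); above it ∆G < 0 (spontaneous); at the transition ∆G ≈ 0, the equilibrium condition.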
11) The values of a small number of State Functions completely define the state of a thermodynamic system. These values are independent of the path that took the system to that state.
12) The amounts of heat and work involved in going from State A to State B will depend on the particular path taken.
13) Values of state functions at state B can be readily calculated, if they are known at state A, because a reversible path can be used for the calculation to get from A to B, even if the actual path the physical system takes is irreversible.
Comments on 12) and 13)
More on Heat Capacities
Under the assumption that there are no bond energy changes and the only energy systems are the thermal and bond systems, the first law of thermodynamics reduces to
\[ dU = dQ + dW \]
\[ dE_{th} = dQ + dW, \text{ since } \Delta E_{bond} = 0 \]
Solving for dQ and differentiating with respect to T we obtain a general expression for the heat capacity in terms of our model:
\[ C = \frac{dQ}{dT} = \frac{dE_{th}}{dT} - \frac{dW}{dT} \]
Previously, we wanted the constant volume heat capacity. In that case, the dW/dT term was zero, so for constant volume we had: \[ C_{v} = \frac{dE_{th}}{dT} \]
For constant pressure, the work term is not zero, so we need to evaluate it for the particular substance we are concerned about.
The first step is to write dW as -PdV, since we are concerned with fluids (liquids and gases). The expression for Cp then becomes
\[ C_{p} = \frac{dQ}{dT} = \frac{dE_{th}}{dT} + P \frac{dV}{dT} \] or
\[ C_{p} = C_{v} + P \frac{dV}{dT} \]
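As a concrete check of this relation (a sketch assuming an ideal monatomic gas, with C_v = 3R/2 taken as an assumption):

```python
# For an ideal gas, V = nRT/P, so at constant P, dV/dT = nR/P and therefore
# C_p = C_v + P*(nR/P) = C_v + nR.  Verified here with a finite difference.
R = 8.314       # J/(mol K)
n = 1.0         # mol
P = 101325.0    # Pa
C_v = 1.5 * n * R   # assumed monatomic ideal gas

def V(T):
    return n * R * T / P            # ideal-gas equation of state

T, dT = 300.0, 1e-3
dV_dT = (V(T + dT) - V(T - dT)) / (2 * dT)   # central difference
C_p = C_v + P * dV_dT
print(C_p, 2.5 * n * R)             # both approx 20.785 J/K
```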
To go any further we need to evaluate dV/dT, which will depend on the substance. We will evaluate this within the Ideal Gas Model shortly.
Algebraic Representations
Relationships 1-3
\( \Delta E = \Delta E_{\text{mechanical}} + \Delta U \)
\( \Delta E_{\text{physical system}} = Q_{\text{into physical system}} + W_{\text{done on physical system}} \)
or simply \[ \Delta E = Q + W \]
\( \Delta U = \Delta E_{bond} + \Delta E_{thermal} + \Delta E_{atomic} + \Delta E_{nuclear} \)
The 1st Law of Thermodynamics
∆U = Q + W; in differential form: dU = dQ + dW; and for simple fluids: dU = dQ – PdV
and for reversible fluid systems or along reversible paths: dU = TdS – PdV
Because dU = TdS – PdV involves only functions of state (Q’s and W’s have been replaced by state functions), it is true for all processes, whether they are reversible or not.
Expression for work done on a fluid
\[ W = - \int_{V_{i}}^{V_{f}} P(V)\, dV \qquad \text{or in differential form: } dW = -P(V)\, dV \]
Relations 4-6
The Second Law of Thermodynamics:
In a closed system, \( \Delta S \geq 0 \)
\[ dS = \frac{dQ}{T} \] in a reversible process
Extension of the concept of heat capacity:
Heat Capacity
\(C = dQ/dT \)
Heat Capacity at Constant Volume and Constant Pressure (with \( \Delta E_{bond} = 0 \))
\( C_{v} = dE_{th}/dT, \qquad C_{p} = C_{v} + P \cdot dV/dT \)
Relations 7-10
Enthalpy
\( H = U + PV\) , At constant pressure, \( \Delta H = Q \)
Gibbs Energy
\( G = H - TS\) , At constant pressure and temperature, \( \Delta G = \Delta H - T \Delta S \)
If \( \Delta G < 0 \), the process is spontaneous
If \( \Delta G > 0 \), the process is not spontaneous
If \( \Delta G = 0 \) , the forward and reverse directions are in equilibrium
Equations of State:
Ideal Gas Law: \( PV = nRT\) or \( PV = N k_{B} T \) |
1) Let us work in units where the speed of light $c=1$.
Ref. 1 derives the radial geodesic equation for a particle in the equatorial plane
$$\tag{7.47} (\frac{dr}{d\lambda})^2+2V(r)~=~E^2, $$
with potential
$$ \tag{7.48} 2V(r)~:=~(1-\frac{r_s}{r})((\frac{L}{r})^2+\epsilon). $$
Here $\epsilon=0$ for a massless particle and $\epsilon=1$ for a massive particle. The energy $E$ and angular momentum $L$ are constants of motion (which reflect Killing-symmetries of the Schwarzschild metric); $\lambda$ is the affine parameter of the geodesic; and $r_s\equiv\frac{2GM}{c^2}$ is the Schwarzschild-radius. (More precisely, in the massive case $\epsilon=1$, the quantities $E$ and $L$ are specific quantities, i.e. quantities per unit rest mass; and $\lambda$ is proper time.)
2) By differentiating eq. (7.47) wrt. $\lambda$, we find that the condition for a circular orbit
$$r(\lambda)~\equiv~ r_{*} \qquad\Rightarrow\qquad \frac{dr}{d\lambda}~\equiv~0$$
is
$$\tag{1}V'(r_{*})~=~0\qquad\Leftrightarrow\qquad \frac{2r_{*}}{r_s}~=~3+\epsilon(\frac{r_{*}}{L})^2.$$
3) Let us next investigate an incoming particle, which has non-constant radial coordinate $\lambda\mapsto r(\lambda)$, and that is precisely on the critical border between being captured and not being captured by the black hole. It would have a radial turning point $\frac{dr}{d\lambda}=0$ precisely at the radius $r=r_{*}$, so that
$$\tag{2} 2V(r_{*})~=~E^2\qquad\Leftrightarrow\qquad (1-\frac{r_s}{r_{*}})((\frac{L}{r_{*}})^2+\epsilon)~=~E^2.$$
4)
The massless case $\epsilon=0$. Eq. (1) yields
$$\tag{3}r_{*}~=~\frac{3}{2}r_s.$$
Plugging eq. (3) into eq. (2) then yields the ratio
$$\tag{4} \frac{L}{E}~=~\frac{3}{2}\sqrt{3}r_s. $$
We next use that $L$ and $E$ are constants of motion, so that we can easily identify them at spatial infinity $r=\infty$, where special relativistic formulas apply. The critical impact parameter $b$ is precisely this ratio
$$\tag{5} b~=~\frac{L}{p}~=~\frac{L}{E}~\stackrel{(4)}{=}~\underline{\underline{\frac{3}{2}\sqrt{3}r_s}}. $$
5)
The non-relativistic case $v_{\infty}\ll 1$. The specific energy $E\approx 1$ consists mostly of rest energy. Solving eqs. (1) and (2) then leads to a unique solution
$$\tag{6}r_{*}~\approx~ 2r_s~\approx~ L.$$
The critical impact parameter $b$ becomes
$$\tag{7} b~=~\frac{L}{v_{\infty}}~\approx~\underline{\underline{2r_s\frac{c}{v_{\infty}}}}, $$
cf. Ref. 2. The cross section is $\sigma=\pi b^2$.
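The massless-case result can be checked numerically from eqs. (1)-(3) alone (a sketch in units with $r_s=1$):

```python
import math

# Massless case eps = 0: eq. (1) gives r_* = (3/2) r_s, and eq. (2),
# (1 - r_s/r_*) (L/r_*)^2 = E^2, gives (L/E)^2 = r_*^2 / (1 - r_s/r_*).
# This should reproduce the critical impact parameter b = (3 sqrt(3)/2) r_s.
r_s = 1.0
r_star = 1.5 * r_s                                 # eq. (3)
b = math.sqrt(r_star**2 / (1.0 - r_s / r_star))    # b = L/E from eq. (2)
print(b, 1.5 * math.sqrt(3.0) * r_s)               # both approx 2.598
```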
References:
1. S. Carroll, Lecture Notes on General Relativity, Chapter 7, pp. 172-179. The pdf file is available from his website.
2. V.P. Frolov and I.D. Novikov, Black Hole Physics: Basic Concepts and New Developments, p. 48. |
Difference between revisions of "Inaccessible"
Latest revision as of 09:13, 4 May 2019
Inaccessible cardinals are the traditional entry-point to the large cardinal hierarchy, although weaker notions such as the worldly cardinals can still be viewed as large cardinals.
A cardinal $\kappa$ is ''inaccessible'', also called ''strongly inaccessible'', if it is an uncountable regular strong limit cardinal.
A cardinal $\kappa$ being inaccessible implies the following:
* $V_\kappa$ is a model of ZFC and so inaccessible cardinals are worldly. The worldly cardinals are unbounded in $\kappa$, so $V_\kappa$ satisfies the existence of a proper class of worldly cardinals.
* $\kappa$ is an aleph fixed point and a beth fixed point, and consequently $V_\kappa=H_\kappa$.
* (Solovay) There is an inner model of a forcing extension satisfying ZF+DC in which every set of reals is Lebesgue measurable; in fact, this is equiconsistent with the existence of an inaccessible cardinal.
* For any $A\subseteq V_\kappa$, the set of all $\alpha<\kappa$ such that $\langle V_\alpha;\in,A\cap V_\alpha\rangle\prec\langle V_\kappa;\in,A\rangle$ is club in $\kappa$.
An ordinal $\alpha$ being inaccessible is equivalent to the following:
* $V_{\alpha+1}$ satisfies $\mathrm{KM}$.
* $\alpha>\omega$ and $V_\alpha$ is a Grothendieck universe.
* $\alpha$ is $\Pi_0^1$-Indescribable.
* $\alpha$ is $\Sigma_1^1$-Indescribable.
* $\alpha$ is $\Pi_2^0$-Indescribable.
* $\alpha$ is $0$-Indescribable.
* $\alpha$ is a nonzero limit ordinal and $\beth_\alpha=R_\alpha$ where $R_\beta$ is the $\beta$-th regular cardinal, i.e. the least regular $\gamma$ such that $\{\kappa\in\gamma:\mathrm{cf}(\kappa)=\kappa\}$ has order-type $\beta$.
* $\alpha = \beth_{R_\alpha}$.
* $\alpha = R_{\beth_\alpha}$.
* $\alpha$ is a weakly inaccessible strong limit cardinal (see weakly inaccessible below).
Weakly inaccessible cardinal
A cardinal $\kappa$ is weakly inaccessible if it is an uncountable regular limit cardinal. Under GCH, this is equivalent to inaccessibility, since under GCH every limit cardinal is a strong limit cardinal. So the difference between weak and strong inaccessibility only arises when GCH fails badly. Every inaccessible cardinal is weakly inaccessible, but forcing arguments show that any inaccessible cardinal can become a non-inaccessible weakly inaccessible cardinal in a forcing extension, such as after adding an enormous number of Cohen reals (this forcing is c.c.c. and hence preserves all cardinals and cofinalities and hence also all regular limit cardinals). Meanwhile, every weakly inaccessible cardinal is fully inaccessible in any inner model of GCH, since it will remain a regular limit cardinal in that model and hence also be a strong limit there. In particular, every weakly inaccessible cardinal is inaccessible in the constructible universe $L$. Consequently, although the two large cardinal notions are not provably equivalent, they are equiconsistent.
There are a few equivalent definitions of weakly inaccessible cardinals. In particular:
* Letting $R$ be the transfinite enumeration of regular cardinals, a limit ordinal $\alpha$ is weakly inaccessible if and only if $R_\alpha=\aleph_\alpha$.
* A nonzero cardinal $\kappa$ is weakly inaccessible if and only if $\kappa$ is regular and there are $\kappa$-many regular cardinals below $\kappa$; that is, $\kappa=R_\kappa$.
* A regular cardinal $\kappa$ is weakly inaccessible if and only if $\mathrm{REG}$ is unbounded in $\kappa$ (showing the correlation between weakly Mahlo cardinals and weakly inaccessible cardinals, as stationary in $\kappa$ is replaced with unbounded in $\kappa$).
Levy collapse
The Levy collapse of an inaccessible cardinal $\kappa$ is the $\lt\kappa$-support product of $\text{Coll}(\omega,\gamma)$ for all $\gamma\lt\kappa$. This forcing collapses all cardinals below $\kappa$ to $\omega$, but since it is $\kappa$-c.c., it preserves $\kappa$ itself, and hence ensures $\kappa=\omega_1$ in the forcing extension.
Inaccessible to reals
A cardinal $\kappa$ is inaccessible to reals if it is inaccessible in $L[x]$ for every real $x$. For example, after the Levy collapse of an inaccessible cardinal $\kappa$, which forces $\kappa=\omega_1$ in the extension, the cardinal $\kappa$ is of course no longer inaccessible, but it remains inaccessible to reals.
Universes
When $\kappa$ is inaccessible, then $V_\kappa$ provides a highly natural transitive model of set theory, a universe in which one can view a large part of classical mathematics as taking place. In what appears to be an instance of convergent evolution, the same universe concept arose in category theory out of the desire to provide a hierarchy of notions of smallness, so that one may form such categories as the category of all small groups, or small rings or small categories, without running into the difficulties of Russell's paradox. Namely, a
Grothendieck universe is a transitive set $W$ that is closed under pairing, power set and unions. That is:

(transitivity) If $b\in a\in W$, then $b\in W$.

(pairing) If $a,b\in W$, then $\{a,b\}\in W$.

(power set) If $a\in W$, then $P(a)\in W$.

(union) If $a\in W$, then $\cup a\in W$.
The
Grothendieck universe axiom is the assertion that every set is an element of a Grothendieck universe. This is equivalent to the assertion that the inaccessible cardinals form a proper class. Degrees of inaccessibility
A cardinal $\kappa$ is
$1$-inaccessible if it is inaccessible and a limit of inaccessible cardinals. In other words, $\kappa$ is $1$-inaccessible if $\kappa$ is the $\kappa^{\rm th}$ inaccessible cardinal, that is, if $\kappa$ is a fixed point in the enumeration of all inaccessible cardinals. Equivalently, $\kappa$ is $1$-inaccessible if $V_\kappa$ is a universe and satisfies the universe axiom.
More generally, $\kappa$ is $\alpha$-inaccessible if it is inaccessible and for every $\beta\lt\alpha$ it is a limit of $\beta$-inaccessible cardinals.
$1$-inaccessibility is already consistency-wise stronger than the existence of a proper class of inaccessible cardinals, and $2$-inaccessibility is stronger than the existence of a proper class of $1$-inaccessible cardinals. More specifically, a cardinal $\kappa$ is $\alpha$-inaccessible if and only if for every $\beta<\alpha$: $$V_{\kappa+1}\models\mathrm{KM}+\text{There is a proper class of }\beta\text{-inaccessible cardinals}$$
As a result, if $\kappa$ is $\alpha$-inaccessible then for every $\beta<\alpha$: $$V_\kappa\models\mathrm{ZFC}+\text{There exists a }\beta\text{-inaccessible cardinal}$$
Therefore $2$-inaccessibility is weaker than $3$-inaccessibility, which is weaker than $4$-inaccessibility... all of which are weaker than $\omega$-inaccessibility, which is weaker than $\omega+1$-inaccessibility, which is weaker than $\omega+2$-inaccessibility...... all of which are weaker than hyperinaccessibility, etc.
Hyper-inaccessible and more
A cardinal $\kappa$ is
hyperinaccessible if it is $\kappa$-inaccessible. One may similarly define that $\kappa$ is $\alpha$-hyperinaccessible if it is hyperinaccessible and for every $\beta\lt\alpha$, it is a limit of $\beta$-hyperinaccessible cardinals. Continuing, $\kappa$ is hyperhyperinaccessible if $\kappa$ is $\kappa$-hyperinaccessible.
More generally, $\kappa$ is
hyper${}^\alpha$-inaccessible if it is hyperinaccessible and for every $\beta\lt\alpha$ it is $\kappa$-hyper${}^\beta$-inaccessible, where $\kappa$ is $\alpha$-hyper${}^\beta$-inaccessible if it is hyper${}^\beta$-inaccessible and for every $\gamma<\alpha$, it is a limit of $\gamma$-hyper${}^\beta$-inaccessible cardinals.
Meta-ordinal terms are terms like $Ω^α · β + Ω^γ · δ +\cdots+Ω^\epsilon · \zeta + \theta$ where $α, β, \dots$ are ordinals. They are ordered as if $Ω$ were an ordinal greater than all the others. "$(Ω · α + β)$-inaccessible" denotes $β$-hyper${}^α$-inaccessible, "$Ω^2$-inaccessible" denotes hyper${}^\kappa$-inaccessible $\kappa$, etc. Every Mahlo cardinal $\kappa$ is $\Omega^α$-inaccessible for all $α<\kappa$, and probably more. A similar hierarchy exists for Mahlo cardinals below weakly compact. All such properties can be killed softly by forcing, making them any chosen weaker property from this family.[1]
Mathematica makes it very easy to plot the contour lines for a function of two real variables using ContourPlot. It also makes it very easy to plot the streamlines of vector fields using StreamPlot. However, I need to plot "contour lines" which cannot come from either method.
Simply put, a "ridge system" is like a vector field, but instead of associating a vector based at every point on the plane, we associate a "line," or "direction," or "unoriented vector." Ridge lines kind of look like stream line plots, but where the stream lines are not "directed." In fact, there is in general no way to "direct" the ridge lines in a consistent manner, so they can't come from the stream lines of any vector field. Three examples are below.
Now let me describe what I would like to do:
I would like to start with a complex function $f(z)$ (where $z, f(z) \in \mathbb{C}$). Say that $f(z) = r e^{i \theta}$. Then I would like to draw the ridge line at $z$ corresponding to the directions $\pm e^{- i \theta/2}$.
Bill Thurston seems to have done this in his answer here, picture below (but he doesn't say how). Ignore the red lines and just look at the blue lines:
My goal is to be able to take a complex function $f(z)$ and make an image similar to the one above. What functions in Mathematica can I use to do this? |
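To make the target precise, here is a small language-neutral sketch (plain Python, with an arbitrary toy grid) of the data I want to draw: at each grid point $z$, the two opposite unit vectors $\pm e^{-i\theta/2}$, where $f(z) = re^{i\theta}$. Rendering each pair as a short undirected segment would give the ridge-line picture.

```python
import cmath

def ridge_directions(f, points):
    """For each grid point z, return the pair of opposite unit vectors
    +-exp(-i*theta/2), where f(z) = r*exp(i*theta)."""
    dirs = {}
    for z in points:
        theta = cmath.phase(f(z))       # theta in (-pi, pi]
        d = cmath.exp(-1j * theta / 2)  # one of the two line directions
        dirs[z] = (d, -d)               # unoriented: both ends of the segment
    return dirs

# Toy grid for f(z) = z, avoiding the singular point z = 0.
points = [complex(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1) if (x, y) != (0, 0)]
field = ridge_directions(lambda z: z, points)

# Each direction is a unit vector, and the pair spans one unoriented line.
for d, dneg in field.values():
    assert abs(abs(d) - 1) < 1e-12 and abs(d + dneg) < 1e-12
```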
Authors
Mats Granath
Johan Schött
Published in Physical Review B. Condensed Matter and Materials Physics Volume 90 Pages 235129 ISSN 1098-0121 Publication year 2014 Published at
Department of Physics (GU)
Language en Links
dx.doi.org/10.1103/PhysRevB.90.2351...
Subject categories Condensed Matter Physics
We study the Mott insulating state of the half-filled paramagnetic Hubbard model within dynamical mean field theory using a recently formulated stochastic and non-perturbative quantum impurity solver. The method is based on calculating the impurity self energy as a sample average over a representative distribution of impurity models solved by exact diagonalization. Due to the natural parallelization of the method, millions of poles are readily generated for the self energy, which allows one to work with very small pole-broadening $\eta$. Solutions at small and large $\eta$ are qualitatively different; solutions at large $\eta$ show featureless Hubbard bands, whereas solutions at small $\eta\leq 0.001$ (in units of half bare band width) show a band of electronic quasiparticles with very small quasiparticle weight at the inner edge of the Hubbard bands. The validity of the results is supported by agreement within statistical error $\sigma_{\text{QMC}}\sim 10^{-4}$ on the imaginary frequency axis with calculations using a continuous time quantum Monte Carlo solver. Nevertheless, convergence with respect to the finite size of the stochastic exact diagonalization solver remains to be rigorously established.
The process of solving rational inequalities is similar to the method of solving polynomial inequalities.

In solving a rational inequality, special attention needs to be given to the denominator.
Points to Remember:
Never multiply both sides by the denominator.
Always bring all the terms to one side and make another side zero.
Do not ignore the zeros of denominator.
If the inequality sign has = included like $\leq$ or $\geq$, then check the intervals carefully. The zeros of the denominators will always be excluded from the solution as making denominator zero makes the expression undefined.
Example: Solve
$\frac{2x-1}{x+1}$ $> 0$
The common practice is to multiply both sides by the denominator $x + 1$.
So,
$(2x - 1) > 0$
Solving for $x$, we get
$2x - 1 > 0$
$x >$ $\frac{1}{2}$
Let us check the inequality for region
$x$ = $3$, $\frac{2x-1}{x+1}$ = $\frac{6-1}{3+1}$ $> 0$ True
Check if $x$ = $-2$.

$\frac{2x - 1}{x + 1}$ = $\frac{-4 - 1}{-2 + 1}$ = $\frac{-5}{-1}$ = $5 > 0$

Thus, as per the solution obtained, the solution set has to be $x >$ $\frac{1}{2}$, but the value $x$ = $-2$ satisfies the inequality even though it does not belong to the interval $\left(\frac{1}{2},\ \infty\right)$, which implies that this method of solving the rational inequality is incorrect.
The main concept forgotten here is that if an inequality is multiplied or divided by a negative number, its inequality sign reverses. Since $x + 1$ contains a variable which can take any value, $x + 1$ can be negative as well.
Considering $x + 1$ as negative , the inequality $\frac{2x-1}{x+1}$ $> 0$ becomes $\frac{2x - 1}{x + 1}$ $\times\ (x + 1) < 0\ \times\ (x + 1)$
$2x - 1 < 0$
$x <$ $\frac{1}{2}$
$x + 1$ = $0$ will make the expression undefined so $x$ = $-1$ becomes restricted value.
Proper Step by Step method to solve Rational Inequality:
1) Make the right side of the inequality zero.
Example: $\frac{x+2}{x}$ < $\frac{3x+1}{x+3}$

Subtract $\frac{3x+1}{x+3}$ from both sides.

$\frac{x+2}{x}$ - $\frac{3x+1}{x+3}$ < $\frac{3x+1}{x+3}$ - $\frac{3x+1}{x+3}$

$\frac{x+2}{x}$ - $\frac{3x+1}{x+3}$ < 0

Simplify the left side.

The LCD for $x$ and $x + 3$ will be $x(x + 3)$.

$\frac{x+2}{x}$ $\times$ $\frac{x+3}{x+3}$ - $\frac{3x+1}{x+3}$ $\times$ $\frac{x}{x}$ < 0

$\frac{x^2+5x+6}{x \times (x+3)}$ - $\frac{3x^2+x}{x \times (x+3)}$ < 0

$\frac{x^2+5x+6-3x^2-x}{x \times (x+3)}$ < 0

$\frac{-2x^2+4x+6}{x \times (x+3)}$ < 0
Find the zeros of the numerator and the denominator separately.
$-2x^2 + 4x + 6$ = $0$ and $x(x + 3)$ = $0$
To have $\frac{-2x^2+4x+6}{x \times (x+3)}$ < 0, the numerator and the denominator must have opposite signs.

It is better to use a number line sign chart to check the inequality on each interval.
$-2x^2 + 4x + 6$ = $0$ $x(x + 3)$ = $0$
$x^2 - 2x - 3$ = $0$ $x$ = $0$ or $x + 3$ = $0$
$(x - 3)(x + 1)$ = $0$ $x$ = $0$ or $x$ = $-3$
$x - 3$ = $0$ or $x + 1$ = $0$
So, finally we get
$x$ = $3,\ x$ = $-1,\ x$ = $0$ and $x$ = $-3$.
Mark all these points on the number line and then check the sign of the expression in the given region.
Check $(-\infty,\ -3),\ x$ = $-4$
$\frac{-2(x-3)(x+1)}{x \times (x+3)}$ = $\frac{-2(-4-3)(-4+1)}{-4 \times (-4+3)}$ = $\frac{-ve}{+ve}$
So, the expression is negative.
The next region is $(-3,\ -1)$. Take a value in this interval, say $x$ = $-2$.
The expression will be $\frac{-2(-2-3)(-2+1)}{-2 \times (-2+3)}$ = $\frac{-10}{-2}$ = $5$: $\frac{-ve}{-ve}$: $+ve$, so $> 0$
Similarly check for all the intervals.
Since we need $\frac{-2x^2+4x+6}{x \times (x+3)}$ < 0
The intervals that have negative sign will be the solution.
The solution will be $(-\infty,\ -3) \cup (-1,\ 0) \cup (3,\ \infty)$
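The sign-chart procedure can be automated: test one point inside each interval cut out by the critical points and keep the intervals where the expression is negative. A minimal sketch in plain Python, using the expression from this example:

```python
def negative_intervals(f, critical_points):
    """Sign chart: return the open intervals between consecutive critical
    points on which f is negative (the solution regions of f(x) < 0)."""
    pts = sorted(critical_points)
    # One probe point before, between, and after the critical points.
    probes = [pts[0] - 1] + [(a + b) / 2 for a, b in zip(pts, pts[1:])] + [pts[-1] + 1]
    edges = [float('-inf')] + pts + [float('inf')]
    return [(lo, hi) for lo, hi, x in zip(edges, edges[1:], probes) if f(x) < 0]

# f(x) = (-2x^2 + 4x + 6) / (x(x + 3)); critical points are -3, -1, 0, 3.
f = lambda x: (-2 * x**2 + 4 * x + 6) / (x * (x + 3))
sol = negative_intervals(f, [-3, -1, 0, 3])
print(sol)   # the negative intervals: (-inf, -3), (-1, 0) and (3, inf)
```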
Let $\sum a_n x^n$ be a power series whose radius of convergence is $0<R<\infty$.
What is the radius of convergence, $R'$ of $\sum \frac {n^n}{n!} a_n x^n$?
If I knew that $\lim_{n\rightarrow\infty} \frac {a_{n+1}}{a_n}$ exists, this would be an easy problem, since (after simplification):
$\frac1{R'}=\lim_{n\rightarrow\infty}(\frac{1+n}n)^n\cdot\frac{a_{n+1}}{a_n}=\frac e R$
But what bothers me is, how can I be sure $\lim \frac{a_{n+1}}{a_n}$ exists in the first place?
edit: $\limsup$ is not enough. For example, in the series $ f_n(x) = \begin{cases} \frac xn, & \text{if $n$ is even} \\ \frac {2x}n, & \text{if $n$ is odd} \end{cases}$
The radius of convergence is 1 but $\limsup \frac {a_{n+1}}{a_n}=2$. |
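One workaround, if I'm not mistaken, is the Cauchy–Hadamard formula, which needs no ratio limit at all:

$$\frac 1{R'}=\limsup_{n\rightarrow\infty}\left(\frac{n^n}{n!}|a_n|\right)^{1/n}=\lim_{n\rightarrow\infty}\left(\frac{n^n}{n!}\right)^{1/n}\cdot\limsup_{n\rightarrow\infty}|a_n|^{1/n}=\frac e R$$

since $(n^n/n!)^{1/n}\rightarrow e$ (Stirling), and a factor with a genuine positive limit can be pulled out of a $\limsup$.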
We considered the pentagonal numbers which begin :
\(1, 5, 12, 22, 35, 51, 70, 92, 117, 145, …\)
That is :
\(\displaystyle P_n = \frac {n(3n-1)}{2}\) for \(n = 1, 2, 3, … \)
Now consider the following sequence :
\(2, 5, 8, 11, 14, 17, 20, 23, 26, 29, …\)
which begins at \(2\) and has a common difference of \(3\) between the terms.
For any arithmetic sequence, we have :
\(a_n = a_1 + d(n-1), \, n = 1, 2, 3, … \)
So :
\( a_n = 2 + 3(n-1) = 2 + 3n-3 = 3n-1, \, n = 1, 2, 3, …\)
Now we can sum this using standard results :
\(\displaystyle \begin {align}
\sum_{k=1}^n (3k-1) &= \sum_{k=1}^n 3k - \sum_{k=1}^n 1\\ & = \frac {3n(n +1)}{2} - n\\ & = \frac {3n^2 +3n}{2} - \frac {2n}{2}\\ & = \frac {3n^2 + n}{2}\\ & = \frac {n(3n + 1)}{2} \end {align}\)
This gives us the sequence :
\(2, 7, 15, 26, 40, 57, 77, 100, 126, 155, …\)
Let us call this sequence \(Q_n\) :
\(\displaystyle Q_n = \frac {n(3n + 1)}{2}\) for \(n = 1, 2, 3, …\)
What happens when we consider the negative integers?
If,
\(\displaystyle P_n = \frac {n(3n-1)}{2}\) for \(n = -1, -2, -3, …\)
Then,
\(P_n = 2, 7, 15, 26, 40, 57, 77, 100, 126, 155, …\)
If,
\(\displaystyle Q_n = \frac {n(3n + 1)}{2} \) for \(n = -1, -2, -3, …\)
Then,
\(Q_n = 1, 5, 12, 22, 35, 51, 70, 92, 117, 145, …\)
The two sequences can be interlaced together using the formula for \(P_n\) with \(n\) taking all values in the order \(0, 1, -1, 2, -2, 3, -3, …\) which produces the following sequence :
\(0, 1, 2, 5, 7, 12, 15, 22, 26, 35, …\)
This sequence gives the generalised pentagonal numbers, \(G_n\).
The formulas for \(P_n\) and \(Q_n\) and the sequence \(G_n\) appear in Euler’s Pentagonal Number Theorem.
Note that the sequence \(Q_n\) is the number of internal dots in a pentagonal number diagram.
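The interlacing above is easy to check numerically; a short Python sketch:

```python
def pentagonal(n):
    """P_n = n(3n - 1)/2, valid for any integer n (negative n gives Q_|n|)."""
    return n * (3 * n - 1) // 2

def generalised_pentagonal(count):
    """Interlace n = 0, 1, -1, 2, -2, ... to get the sequence G_n."""
    order = [0]
    k = 1
    while len(order) < count:
        order += [k, -k]
        k += 1
    return [pentagonal(n) for n in order[:count]]

print(generalised_pentagonal(10))   # [0, 1, 2, 5, 7, 12, 15, 22, 26, 35]
```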
© OldTrout \(2019\)
I have to show that:
\begin{equation} P_{t,T}(K)=e^{-r(T-t)} \int_0^{\infty}\left(K-S\right)^+ q_T^S(S)dS \end{equation}
is equivalent to: \begin{equation} P_{t,T}(K)=e^{-r(T-t)}\int_{-\infty}^{K}\left(\int_{-\infty}^y q_T^S(z)dz\right)dy \end{equation}
Breeden and Litzenberger have shown that using the Leibniz integration rule and differentiating the first equation twice leads to: \begin{equation} q_T^S(K)=e^{r(T-t)}\frac{\partial^2P_{t,T}(K)}{\partial K^2}\vert_{K=S_T} \end{equation}
However, I have difficulties to directly go from the first to the second equation in an elegant way. Does anyone have an idea how this can be achieved?
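One route that seems to work (a sketch, in case it helps) is to write the payoff as an integral of an indicator and swap the order of integration:

\begin{equation} \left(K-S\right)^+=\int_{-\infty}^{K}\mathbf{1}_{\{S\le y\}}\,dy, \end{equation}

so that, since $q_T^S$ vanishes on $(-\infty,0)$ and everything is nonnegative (Tonelli),

\begin{equation} \int_0^{\infty}\left(K-S\right)^+ q_T^S(S)\,dS=\int_{-\infty}^{K}\left(\int_{-\infty}^{y} q_T^S(z)\,dz\right)dy. \end{equation}

Is this the intended argument?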
Many thanks for the help! |
Question:
Pat made the substitution {eq}x - 4 = 6 \sin(t){/eq} in an integral and integrated to obtain {eq}\int f(x) dx = 12 t - 2 \sin(t) \cos(t) + C{/eq}.
Complete Pat's integration by doing the back substitution to find the integral as a function of {eq}x{/eq}
Integration by substitution
When we make an integration using the substitution method (or change of variable), we must at the end also make the back substitution. That is equivalent to finding the inverse of the function used for the variable change. For example, if we use the substitution {eq}y=f(x) {/eq}, then we obtain a solution as a function of {eq}y {/eq}. We have to find {eq}f^{-1} {/eq} and make the back substitution {eq}x=f^{-1}(y) {/eq} to express the solution as a function of {eq}x {/eq}
Answer and Explanation:
For completing Pat's work we must find the inverse function for the back substitution. That is:
{eq}\begin{align} \sin(t)=&\frac{x-4}{6}\\ t&=\arcsin{\frac{x-4}{6}} \end{align} {/eq}
And then we have that:
{eq}\begin{align} \int f(x)dx&=12t-2 \sin(t) \cos(t) + C\\ \int f(x)dx&=12\arcsin(\frac{x-4}{6})-2 \frac{x-4}{6} \sqrt{1-\left(\frac{x-4}{6}\right)^2} + C\\ \end{align} {/eq}
and finally:
{eq}\begin{equation} \int f(x)dx=12 \arcsin\left(\frac{x-4}{6}\right)-\frac{1}{18} (x-4) \sqrt{-x^2+8 x+20} \end{equation} {/eq}
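As a numerical sanity check (plain Python; the sample points are arbitrary values inside the substitution's domain $|x-4|\le 6$), the two forms of the antiderivative agree:

```python
import math

def F_x(x):
    # The antiderivative written in terms of x.
    return 12 * math.asin((x - 4) / 6) - (x - 4) * math.sqrt(-x**2 + 8 * x + 20) / 18

def F_t(x):
    # Pat's antiderivative in terms of t, with t = arcsin((x - 4)/6).
    t = math.asin((x - 4) / 6)
    return 12 * t - 2 * math.sin(t) * math.cos(t)

# The two forms agree (up to the constant C) on the substitution's domain.
for x in (0.5, 2.0, 4.0, 7.5, 9.0):
    assert abs(F_x(x) - F_t(x)) < 1e-9
```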
I'm not sure about your calculation, but Matlab yields the correct result. In this problem, the common approach is to use the Routh-Hurwitz criterion and search for a row of zeros, which signals the possibility of imaginary-axis roots. First, convert the system to the closed-loop transfer function: $$\frac{K}{s^4 + 10s^3 + 88s^2 + 256s + K}$$
The Routh table is
$$\begin{matrix}s^4 &&&& 1 &&&& 88 &&&& K \\s^3 &&&& 10 &&&& 256 \\s^2 &&&& 62.4 &&&& K \\s^1 &&&& \frac{15974.4-10K}{62.4} \\s^0 &&&& K \end{matrix}$$
The \$ s^1 \$ row is the only row that can yield a row of zeros. From the preceding row, we obtain
$$\begin{align}&15974.4 - 10K = 0 \\K &= \frac{15974.4}{10} = 1597.44\end{align}$$
Now we take a look at the row above \$s^1\$ and construct the following polynomial, hence
$$\begin{align}62.4 s^2 + K &= 0 \\62.4 s^2 + 1597.44 &= 0 \\ s^2 &= \frac{-1597.44}{62.4} \\s_{1,2} &= \pm j \sqrt{25.6} \\s_{1,2} &= \pm j 5.0596 \\\end{align}$$
The root locus crosses the imaginary axis at \$\pm j5.0596\$ at the gain \$K=1597.44\$. Consequently, the gain \$K\$ must be less than 1597.44 for the system to be stable. |
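The numbers above can be reproduced in a few lines (plain Python; the coefficients are those of the closed-loop polynomial above):

```python
import math

# Closed-loop denominator: s^4 + 10 s^3 + 88 s^2 + 256 s + K.
a4, a3, a2, a1 = 1.0, 10.0, 88.0, 256.0

# Leading entry of the s^2 row of the Routh array.
b1 = (a3 * a2 - a4 * a1) / a3          # (10*88 - 256)/10 = 62.4

# The s^1 row vanishes when b1*a1 - a3*K = 0, giving the critical gain.
K = b1 * a1 / a3                       # 62.4*256/10 = 1597.44

# Auxiliary polynomial b1*s^2 + K = 0 gives the jw-axis crossing.
omega = math.sqrt(K / b1)              # sqrt(25.6) = 5.0596...

print(K, omega)
```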
Spring 2018, Math 171, Week 10

Poisson Process Conditioning
Let \(N(t)\) be a Poisson process with rate \(\lambda\). Fix \(0 \le n\), \(s \le t\).
Compute \(\mathbb{P}(N(s)=m \mid N(t) = n)\) for \(m \le n\). Give the name and parameters of the distribution.
(Answer) Binomial(\(n,\frac{s}{t}\))
Compute \(\mathbb{E}[N(s) \mid N(t) = n]\)
(Answer) \(n\frac{s}{t}\)
Let \(N(t)\) be a Poisson process with rate \(\lambda\). Fix \(0 \le m \le n\), \(r \le s \le t\).
Compute \(\mathbb{P}(N(s)-N(r)=k \mid N(t) = n, N(r)=m)\). Give the name and parameters of the distribution.
(Answer) Binomial(\(n-m,\frac{s-r}{t-r}\))
Compute \(\mathbb{E}[N(s) \mid N(t) = n, N(r)=m]\)
(Answer) \(m + (n-m)\frac{s-r}{t-r}\)
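This conditional mean can be checked by simulation, using the conditioning property that, given the counts, the \(n-m\) arrivals in \((r,t]\) are i.i.d. uniform there. The parameter values below are arbitrary choices, and the mean counts the \(m\) arrivals already present by time \(r\):

```python
import random

random.seed(1)

r, s, t = 1.0, 2.0, 4.0
m, n = 2, 5                  # N(r) = m, N(t) = n
k = n - m                    # arrivals in (r, t], i.i.d. uniform there

trials = 100_000
total = 0
for _ in range(trials):
    total += m + sum(random.uniform(r, t) <= s for _ in range(k))

estimate = total / trials
exact = m + k * (s - r) / (t - r)     # E[N(s) | N(t)=n, N(r)=m] = 3.0 here
print(estimate, exact)
```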
Compute the expected amount of time between the first and last arrivals in \([r, t]\) given that \(N(t) = n\) and \(N(r)=m\).
(Solution) For convenience of writing, I will omit the condition on \(N(t) = n\) and \(N(r)=m\) in the expectations. Let \(L\) be the time of the last arrival in the interval, and \(F\) be the time of the first arrival in the interval. Then we seek \[\begin{aligned}\mathbb{E}[L-F] &= \mathbb{E}[L] - \mathbb{E}[F]\\&=t - \mathbb{E}[t-L] - r - \mathbb{E}[F-r] \\&= (t-r) - \mathbb{E}[t-L] - \mathbb{E}[F-r].\end{aligned}\] By symmetry, we can see that \(t-L\) (the time between the last arrival and the end of the interval) and \(F-r\) (the time between the start of the interval and the first arrival) have the same distribution, so \[\mathbb{E}[L-F] = (t-r) - 2\mathbb{E}[F-r].\] Since we have conditioned on \(N(t) = n\) and \(N(r)=m\), we know that there are \(n-m\) arrivals uniformly distributed in the interval by the conditioning property of the Poisson process. Therefore, letting \(U_1, \dots U_{n-m}\) be i.i.d. uniform\([0, t-r]\) random variables, we have \[\begin{aligned}\mathbb{E}[F-r] &= \mathbb{E}[\min(U_1, \dots U_{n-m})]\\&= \int_0^{t-r}P(\min(U_1, \dots U_{n-m}) > s) ds\\&=\int_0^{t-r}P(U_1>s, \dots U_{n-m}>s) ds\\&=\int_0^{t-r}P(U_1>s)^{n-m} ds\\&=\int_0^{t-r}\left(\frac{t-r-s}{t-r}\right)^{n-m} ds\\&=-(t-r)\frac{\left(\frac{t-r-s}{t-r}\right)^{n-m+1}}{n-m+1} \Big\vert_0^{t-r}\\&=\frac{t-r}{n-m+1}\end{aligned}\] So finally, we have \[\begin{aligned}\mathbb{E}[L-F] &= (t-r) - 2\frac{t-r}{n-m+1}\\&=(t-r)\left(1-\frac{2}{n-m+1}\right)\end{aligned}\] As a sanity check: if \(n-m=1\) we have \(\mathbb{E}[L-F]=0\), and as \(n-m\to\infty\) we have \(\mathbb{E}[L-F]\to(t-r)\).

Renewal Process
Cars queue at a gate. The lengths of the cars are i.i.d with distribution \(F_L\) and mean \(\mu\). Let \(L \sim F_L\). Each successive car stops leaving a gap, distributed according to a uniform distribution on \((0, 1)\), to the car in front (or to the gate in the case of the car at the head of the queue). Consider the number of cars \(N(t)\) lined up within distance t of the gate. Determine \(\lim_{t \to \infty} \frac{\mathbb{E}[N(t)]}{t}\) if
\(L = c\) is a fixed constant
\(L\) is exponentially distributed with parameter \(\lambda\)
Potential customers arrive at a service kiosk in a bank as a Poisson process of rate \(\lambda\). Being impatient, the customers leave immediately unless the assistant is free. Customers are served independently, with mean service time \(\mu\).
Find the mean time between the starts of two successive service periods.
(Answer) \(\mu + 1/\lambda\)
Find the long run rate at which customers are served
(Answer) \(\frac{1}{\mu + 1/\lambda}\)
What proportion of customers who arrive at the bank actually get served?
(Answer) \(\frac{1}{\lambda\mu + 1}\)
Customers arrive at a 24 hour restaurant (which has only one table) according to a poisson process with rate 1 party per hour. If the table is occupied, they leave immediately. If the table is unoccupied, they stay and eat. Customers spend an average of $20 each. The length of time a party stays is uniformly distributed in \([0, N/2 \mathrm{\ hours}]\) where \(N\) is the number of people in the party. Answer the following questions for the cases when \(N\) is uniform\(\{2, 3, 4\}\) and when \(N\) is geometric(\(1/2\)).
Let \(\tau_k\) be the time between when the \((k-1)\)st party finishes eating and the \((k)\)th party finishes eating. Compute \(\mathbb{E}[\tau_k]\)
(Answer) \(\mathbb{E}[N/4]+1\)
Let \(X_k\) be the amount the \((k)\)th party spends. Compute \(\mathbb{E}[X_k]\).
(Answer) \(20\mathbb{E}[N]\)
Let \(S(t)\) be the total amount paid by all parties by time \(t\). Compute \(\lim_{t\to \infty}\frac{S(t)}{t}\), and justify the validity of your computation.
(Answer) \(\frac{20\mathbb{E}[N]}{\mathbb{E}[N/4]+1}\). Justify by the SLLN.

Useful Formulas
For a discrete random variable \(X\) taking values in \(\{0, 1, 2, \dots\}\), we have \(\mathbb{E}[X] = \sum_{k=0}^\infty P(X > k)\)
For a continuous random variable \(T\) taking values in \([0, \infty)\), we have \(\mathbb{E}[T] = \int_0^\infty P(T > s) ds\) |
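Both tail-sum identities are easy to verify numerically; for instance, for a geometric(\(p\)) count on \(\{0,1,2,\dots\}\) (a choice made here just for illustration):

```python
# X geometric on {0, 1, 2, ...}: P(X = j) = (1 - p)^j * p, so E[X] = (1 - p)/p.
p = 0.3
pmf = lambda j: (1 - p) ** j * p
tail = lambda k: (1 - p) ** (k + 1)    # P(X > k) for this distribution

N = 1000   # truncation depth; the neglected tail is astronomically small
mean_direct = sum(j * pmf(j) for j in range(N))
mean_tail = sum(tail(k) for k in range(N))

assert abs(mean_direct - (1 - p) / p) < 1e-9
assert abs(mean_tail - (1 - p) / p) < 1e-9
```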
I am trying to understand the behavior of Dirac spinors in terms of the scalar perturbations of the FRW metric and I was wondering:
Can the perturbation of the spin connection $\delta \omega^{a}_{b \mu}$ be determined by perturbing the first structure equation $de^{a} = -\omega^{a}_{b}\wedge e^{b}$ ? More explicitly, can I use the following
$ d\left(\delta e^{a}\right) = -\delta\omega^{a}_{b}\wedge \bar{e}^{b}-\bar{\omega}^{a}_{b}\wedge \delta e^{b}$
?
In the above, the background quantities are being denoted with a bar. |
Difference between revisions of "User:Nikita2"

Latest revision as of 21:58, 17 June 2018

I am Nikita Evseev, Novosibirsk, Russia. My research interests are in [[Mathematical_analysis | Analysis]] and [[Sobolev space|Sobolev spaces]].

Pages of which I am contributing and watching
Analytic function | Cauchy criterion | Cauchy integral | Condition number | Continuous function | D'Alembert criterion (convergence of series) | Dedekind criterion (convergence of series) | Derivative | Dini theorem | Dirichlet-function | Ermakov convergence criterion | Extension of an operator | Fourier transform | Friedrichs inequality | Fubini theorem | Function | Functional | Generalized derivative | Generalized function | Geometric progression | Hahn-Banach theorem | Harmonic series | Hilbert transform | Hölder inequality | Lebesgue integral | Lebesgue measure | Leibniz criterion | Leibniz series | Lipschitz Function | Lipschitz condition | Luzin-N-property | Newton-Leibniz formula | Newton potential | Operator | Poincaré inequality | Pseudo-metric | Raabe criterion | Riemann integral | Series | Sobolev space | Vitali theorem |
TeXing
I'm keen on improving the appearance of EoM articles by rewriting formulas and math symbols in TeX.
Now there are
3040 (out of 15,890) articles with Category:TeX done tag.
For example, typing \sum_{n=1}^{\infty}n!z^n produces $\sum_{n=1}^{\infty}n!z^n$. You may look at Category:TeX wanted.
Definition:Prime Decomposition Contents Definition
Let $n \in \Z$ with $n > 1$. Then $n$ can be expressed uniquely as:

$\displaystyle n = \prod_{p_i \mathop \divides n} {p_i}^{k_i} = {p_1}^{k_1} {p_2}^{k_2} \cdots {p_r}^{k_r}$
where:
$p_1 < p_2 < \cdots < p_r$ are distinct primes $k_1, k_2, \ldots, k_r$ are (strictly) positive integers. This unique expression is known as the prime decomposition of $n$.
For each $p_j \in \left\{ {p_1, p_2, \ldots, p_r}\right\}$, its power $k_j$ is known as the
multiplicity of $p_j$. Also known as
The
prime decomposition of $n$ is also known as the prime factorization of $n$.

$n$ | Prime Decomposition of $n$
$1$ | $1$
$2$ | $2$
$3$ | $3$
$4$ | $2^2$
$5$ | $5$
$6$ | $2 \times 3$
$7$ | $7$
$8$ | $2^3$
$9$ | $3^2$
$10$ | $2 \times 5$
$11$ | $11$
$12$ | $2^2 \times 3$

Also see: Results about prime decompositions can be found here.
The UK English spelling of
prime factorization is prime factorisation. Sources 1971: George E. Andrews: Number Theory... (previous) ... (next): $\text {2-4}$ The Fundamental Theorem of Arithmetic: Exercise $3$ 1971: Allan Clark: Elements of Abstract Algebra... (previous) ... (next): Chapter $1$: Properties of the Natural Numbers: $\S 24$ 1978: Thomas A. Whitelaw: An Introduction to Abstract Algebra... (previous) ... (next): $\S 13$: The fundamental theorem of arithmetic |
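The decomposition can be computed mechanically; a minimal trial-division sketch (illustrative only, not efficient for large $n$):

```python
def prime_decomposition(n):
    """Return the prime decomposition of n > 1 as a map p_i -> k_i."""
    assert n > 1
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:                  # any leftover factor is prime
        factors[n] = factors.get(n, 0) + 1
    return factors

print(prime_decomposition(12))   # {2: 2, 3: 1}, i.e. 12 = 2^2 x 3
```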
Rule of Simplification/Proof Rule Contents Proof Rule
As a proof rule it is expressed in either of the two forms:
$(1): \quad$ If we can conclude $\phi \land \psi$, then we may infer $\phi$. $(2): \quad$ If we can conclude $\phi \land \psi$, then we may infer $\psi$. It can be written: $\displaystyle {\phi \land \psi \over \phi} \land_{e_1} \qquad \qquad {\phi \land \psi \over \psi} \land_{e_2}$
The Rule of Simplification is invoked for $\phi \land \psi$ in either of the two forms:
Form 1
Pool: The pooled assumptions of $\phi \land \psi$ Formula: $\phi$ Description: Rule of Simplification Depends on: The line containing $\phi \land \psi$ Abbreviation: $\operatorname {Simp}_1$ or $\land \mathcal E_1$ Form 2
Pool: The pooled assumptions of $\phi \land \psi$ Formula: $\psi$ Description: Rule of Simplification Depends on: The line containing $\phi \land \psi$ Abbreviation: $\operatorname {Simp}_2$ or $\land \mathcal E_2$
The first of the two can be expressed in natural language as:

Given that $\phi \land \psi$ holds, it follows that $\phi$ holds.

The second of the two can be expressed in natural language as:

Given that $\phi \land \psi$ holds, it follows that $\psi$ holds.
The Rule of Simplification can also be referred to as the
rule of and-elimination. Some sources give this as the law of simplification for logical multiplication.
Such treatments may also refer to the Rule of Addition as the
law of simplification for logical addition.
This extra level of wordage has not been adopted by $\mathsf{Pr} \infty \mathsf{fWiki}$, as it is argued that it may cause clarity to suffer.
Technical note: the Rule of Simplification can be invoked in a tableau proof using either of the following templates:
{{Simplification|line|pool|statement|depend|1 or 2}}
or:
{{Simplification|line|pool|statement|depend|1 or 2|comment}}
where:
line is the number of the line on the tableau proof where the Rule of Simplification is to be invoked

pool is the pool of assumptions (comma-separated list)

statement is the statement of logic that is to be displayed in the Formula column, without the $ ... $ delimiters

depend is the line of the tableau proof upon which this line directly depends

1 or 2 should hold 1 for Simplification$_1$, and 2 for Simplification$_2$

comment is the (optional) comment that is to be displayed in the
Notes column. Sources 1946: Alfred Tarski: Introduction to Logic and to the Methodology of Deductive Sciences (2nd ed.) ... (previous) ... (next): $\S \text{II}.12$: Laws of sentential calculus 1964: Donald Kalish and Richard Montague: Logic: Techniques of Formal Reasoning ... (previous) ... (next): $\text{II}$: 'AND', 'OR', 'IF AND ONLY IF': $\S 3$ 1965: E.J. Lemmon: Beginning Logic ... (previous) ... (next): $\S 1.3$: Conjunction and Disjunction 1973: Irving M. Copi: Symbolic Logic (4th ed.) ... (previous) ... (next): $3.1$: Formal Proof of Validity 1980: D.J. O'Connor and Betty Powell: Elementary Logic ... (previous) ... (next): $\S \text{II}$: The Logic of Statements $(2): \ 1$: Decision procedures and proofs: $7$ 2000: Michael R.A. Huth and Mark D. Ryan: Logic in Computer Science: Modelling and reasoning about systems ... (previous) ... (next): $\S 1.2.1$: Rules for natural deduction
Time scales of epidemic spread and risk perception on adaptive networks
Li-Xin Zhong, Tian Qiu, Fei Ren, Ping-Ping Li, Bi-Hui Chen
arXiv:1011.1621
Incorporating dynamic contact networks and delayed awareness into a contagion model with memory, we study the spreading patterns of infectious diseases in connected populations. It is found that the spread of an infectious disease is related not only to an individual's past exposures to the infected but also to the time scales of risk perception reflected in the social network adaptation. The epidemic threshold $p_{c}$ is found to decrease as the time scale parameter $s$ and the memory length $T$ increase; they satisfy the equation $p_{c} =\frac{1}{T}+ \frac{\omega T}{\langle k\rangle a^s(1-e^{-\omega T^2/a^s})}$. Both the lifetime of the epidemic and the topological properties of the evolved network are considered. The standard deviation $\sigma_{d}$ of the degree distribution increases with the absorbing time $t_{c}$; a power-law relation $\sigma_{d}=mt_{c}^\gamma$ is found.
Symmetry and asymptotic behavior of ground state solutions for Schrödinger systems with linear interaction
HLM, CEMS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190; School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
$\left\{ {\begin{array}{*{20}{l}}{ - \Delta u + {\lambda _1}u + \kappa v = {\mu _1}{u^3} + \beta u{v^2}}&{\quad {\rm{ in}}\;\;\Omega ,}\\{ - \Delta v + {\lambda _2}v + \kappa u = {\mu _2}{v^3} + \beta {u^2}v}&{\quad {\rm{ in}}\;\;\Omega ,}\\{u = v = 0\;on\;\;\partial \Omega \;({\rm{or}}\;u,v \in {H^1}({\mathbb{R}^N})\;{\rm{as}}\;\Omega = {\mathbb{R}^N}),}&{}\end{array}} \right.$
Keywords: Nonlinear elliptic system, ground state solution, foliated Schwarz symmetric, asymptotic limits. Mathematics Subject Classification: Primary: 35J20, 35J61; Secondary: 35P30. Citation: Zhitao Zhang, Haijun Luo. Symmetry and asymptotic behavior of ground state solutions for Schrödinger systems with linear interaction. Communications on Pure & Applied Analysis, 2018, 17 (3) : 787-806. doi: 10.3934/cpaa.2018040
References:
[1] [2] [3] [4] [5] [6]
J. Belmonte-Beitia, V. M. Pérez-García and P. J. Torres,
Solitary waves for linearly coupled nonlinear Schrödinger equations with inhomogeneous coefficients,
[7] [8] [9] [10]
B. Deconinck,
Linearly coupled Bose-Einstein condesates: From Rabi oscillations and quasiperiodic solutions to oscillating domain walls and spiral waves,
[11]
G. W. Dai, R. S. Tian and Z. T. Zhang, Global bifurcation, priori bounds and uniqueness of positive solutions for coupled nonlinear Schrödinger systems, preprint.Google Scholar
[12]
D. Gilbarg and N. S. Trudinger,
[13]
D. S. Hall, M. R. Matthews, J. R. Ensher and C. E. Wieman,
Dynamics of component separation in a binary mixture of Bose-Einstein condensates,
[14] [15]
K. Li and Z. T. Zhang, Existence of solutions for a Schrödinger system with linear and nonlinear couplings,
[16] [17] [18] [19] [20] [21] [22] [23] [24] [25] [26] [27] [28]
J. C. Wei and T. Weth,
Nonradial symmetric bound states for a system of coupled Schrödinger equations,
[29] [30] [31]
M. Willem,
[32]
M. Willem,
I am working with the following TransformedDistribution:
Dist = TransformedDistribution[2 v S1 + 2 v S2 - v c, {S1 \[Distributed] BinomialDistribution[1/2 (c + t), p], S2 \[Distributed] BinomialDistribution[1/2 (c - t), 1 - p]}]
This random variable has neat expressions for mean, variance, skewness, and kurtosis:
Mean: (-1 + 2 p) t v
Variance: -4 c (-1 + p) p v^2
Skewness: ((1 - 2 p) t v)/(c Sqrt[-c (-1 + p) p v^2])
Kurtosis: 3 - 6/c + 1/(c p - c p^2)
What I am trying to do is compare this random variable, when $t=0$, $c = \frac{1}{v^2}$, $v$ approaches $0$, and $c$ thus approaches infinity, with one drawn from the Normal distribution. Under these restrictions, $\mu$ = 0, $\sigma^2 = 4 (1 - p) p$, $\lambda = 3$, and $\kappa = 0$. Thus, it appears we are dealing with the Normal distribution, but that is not necessarily the case (see:link). |
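As a sanity check on those closed forms outside Mathematica, here is a short Python sketch of mine (the parameter values are arbitrary) that builds the exact distribution of $2vS_1 + 2vS_2 - vc$ by enumerating the two binomial pmfs and compares its mean and variance against the stated expressions:

```python
from math import comb

def binom_pmf(n, p):
    # list of (k, P[S = k]) for S ~ Binomial(n, p)
    return [(k, comb(n, k) * p**k * (1 - p)**(n - k)) for k in range(n + 1)]

def moments(c, t, p, v):
    # X = 2 v S1 + 2 v S2 - v c with S1 ~ Bin((c+t)/2, p), S2 ~ Bin((c-t)/2, 1-p)
    n1, n2 = (c + t) // 2, (c - t) // 2
    dist = {}
    for k1, w1 in binom_pmf(n1, p):
        for k2, w2 in binom_pmf(n2, 1 - p):
            x = 2 * v * k1 + 2 * v * k2 - v * c
            dist[x] = dist.get(x, 0.0) + w1 * w2
    mean = sum(x * w for x, w in dist.items())
    var = sum(w * (x - mean) ** 2 for x, w in dist.items())
    return mean, var

m, s2 = moments(c=10, t=4, p=0.3, v=0.5)
# quoted closed forms: mean = (2p - 1) t v, variance = 4 c p (1 - p) v^2
print(round(m, 6), round(s2, 6))  # -0.8 2.1
```

Both agree with the quoted expressions at these values, which supports the reconstruction of the moment formulas above.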
$$\mathrm{molarity} = \frac{\text{amount of solute}}{\text{volume of solution}} $$ and amount of substance is based on quantity (larger mass means larger amount), so how come it is an intensive property? Shouldn't it be an extensive property?
Concentration is an intensive property. The value of the property does not change with scale. Let me give you an example:
Let us say you had a homogeneous mixture (solution) of sodium carbonate in water prepared from 112 g of sodium carbonate dissolved in 1031 g of water.
The concentration (in mass percent, or mass of solute per mass of solution) is:
$$c=\frac{112\text{ g solute}}{(112+1031)\text{g solution}}=0.09799 =9.799\%\text{ sodium carbonate by mass}$$
The concentration is the ratio of sodium carbonate to the total mass of the solution, which does not change if you are dealing with the entire 1143 g of the solution or if you dispense some of that solution into another vessel.
If you dispense 11.7 g of that solution into a flask for a reaction, what is the concentration of sodium carbonate in that flask?
It is still 9.799% by mass. The ratio of the mass of sodium carbonate present to the total mass present has not changed. The actual mass of sodium carbonate has changed, though:
$$0.09799\dfrac{\text{g solute}}{\text{g solution}}\times11.7\text{ g solution}=1.15\text{ g solute}$$
The concentration is a property dependent only on the composition of the solution, not the amount of solution you have. The concentration of a solution with defined composition is independent of the size of the system.
In general, any property that is a ratio of two extensive properties becomes an intensive property, since both extensive properties will scale similarly with increasing or decreasing size of the system.
Some examples include:
Concentration (including molarity) - ratio of amount of solute (mass, volume, or moles) to amount of solution (mass or volume usually)
Density - ratio of mass of a sample to the volume of the sample
Specific heat - ratio of heat transferred to a sample to the amount of the sample (mass or moles usually, but volume also)
Each of these intensive properties is a ratio of an extensive property we care about (amount of solute, mass of sample, heat transferred) divided by the scale of the system (amount of stuff usually). This is like finding the slope of a graph showing the relationship between two extensive properties. The graph is linear and the value of slope does not change based on how much stuff you have - thus the slope (the ratio) is an intensive property.
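The slope picture can be made concrete in a few lines of Python (a sketch of mine; the numbers are taken from the sodium carbonate example above):

```python
# Extensive quantities scale with the sample; their ratio does not.
solute_g = 112.0
solution_g = 112.0 + 1031.0
w = solute_g / solution_g          # mass fraction -- intensive

# Dispense an 11.7 g portion: both extensive amounts shrink,
# but the ratio (the concentration) is unchanged.
portion_g = 11.7
portion_solute_g = w * portion_g
assert abs(portion_solute_g / portion_g - w) < 1e-12

print(round(100 * w, 3), round(portion_solute_g, 2))  # 9.799 1.15
```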
Consider the following picture:
Break the ice block shown in the picture into two equal halves. Now I hope you would be able to answer the following questions:
1. Which physical properties of the ice block got halved? Certainly mass, volume, etc. (these are all extensive properties).
2. Which physical properties of the ice block remained the same? Density, etc. (these are all intensive properties).
If you have a doubt as to why the density remained the same, here is the explanation: even if the block got halved, the mass per unit volume remains the same in either of the pieces. That means the density (mass per unit volume) remained the same; thus it is an intensive property. Similarly, if you imagine a solution instead of an ice block, you will find that the molarity remains the same even if you divide the solution into two equal halves. Thus molarity is an intensive property.
I wouldn't have asked this question if I hadn't seen this image:
From this image it seems like there are reals that are neither rational nor irrational (dark blue), but is it so or is that illustration incorrect?
A real number is irrational if and only if it is not rational. By definition any real number is either rational or irrational.
I suppose the creator of this image chose this representation to show that rational and irrational numbers are both part of the bigger set of real numbers. The dark blue area is actually the empty set.
This is my take on a better representation:
Feel free to edit and improve this representation to your liking. I've uploaded the SVG source code to pastebin.
No. The definition of an irrational number is a number which is not a rational number, namely it is not the ratio between two integers.
If a real number is not rational, then by definition it is irrational.
However, if you think about
algebraic numbers, which are rational numbers and irrational numbers which can be expressed as roots of polynomials with integer coefficients (like $\sqrt2$ or $\sqrt[4]{12}-\frac1{\sqrt3}$), then there are irrational numbers which are not algebraic. These are called transcendental numbers.
Irrational means not rational. Can something be not rational, and not not rational? Hint: no.
Of course, the "traditional" answer is no, there are no real numbers that are not rational nor irrational. However, being the contrarian that I am, allow me to provide an alternative interpretation which gives a different answer.
In intuitionistic logic, where the law of excluded middle (LEM) $P\vee\lnot P$ is rejected, things become slightly more complicated. Let $x\in \Bbb Q$ mean that there are two integers $p,q$ with $x=p/q$. Then the traditional interpretation of "$x$ is irrational" is $\lnot(x\in\Bbb Q)$, but we're going to call this "$x$ is not rational" instead. The statement "$x$ is not not rational", which is $\lnot\lnot(x\in\Bbb Q)$, is implied by $x\in\Bbb Q$ but not equivalent to it.
Consider the equation $0<|x-p/q|<q^{-\mu}$ where $x$ is the real number being approximated and $p/q$ is the rational approximation, and $\mu$ is a positive real constant. We measure the accuracy of the approximation by $|x-p/q|$, but don't let the denominator (and hence also the numerator, since $p/q$ is near $x$) be too large by demanding that the approximation be within a power of $q$. The larger $\mu$ is, the fewer pairs $(p,q)$ satisfy the equation, so we can find the least upper bound of $\mu$ such that there are infinitely many coprime solutions $(p,q)$ to the equation, and this defines the irrationality measure $\mu(x)$. There is a nice theorem from number theory that says that the irrationality measure of any irrational algebraic number is $2$, and the irrationality measure of a transcendental number is $\ge2$, while the irrationality measure of any rational number is $1$.
Thus there is a measurable gap between the irrationality measures of rational and irrational numbers, and this yields an alternative "constructive" definition of irrational: let $x\in\Bbb I$, read "$x$ is irrational", if $|x-p/q|<q^{-2}$ has infinitely many coprime solutions. Then $x\in\Bbb I\to x\notin\Bbb Q$, i.e. an irrational number is not rational, and in classical logic $x\in\Bbb I\leftrightarrow x\notin\Bbb Q$, so this is equivalent to the usual definition of irrational. This is viewed as a more constructive definition because rather than asserting a negative (that $x=p/q$ yields a contradiction), it instead gives an infinite sequence of good approximations which verifies the irrationality of the number.
This approach is also similar to the continued fraction method: irrational numbers have infinite simple continued fraction representations, while rational numbers have finite ones, so given an infinite continued fraction representation you automatically know that the limit cannot be rational.
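That continued-fraction observation is easy to demonstrate; here is a small Python sketch of mine, using exact `fractions` arithmetic, showing that a rational input yields a finite simple continued fraction:

```python
from fractions import Fraction

def continued_fraction(x, max_terms=20):
    """Simple continued fraction terms of a Fraction; terminates iff x is rational."""
    terms = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator   # integer part
        terms.append(a)
        x = x - a                          # fractional part
        if x == 0:
            break                          # rational inputs always reach 0
        x = 1 / x
    return terms

print(continued_fraction(Fraction(355, 113)))  # [3, 7, 16] -- finite, hence rational
```

For an irrational input such as $\sqrt2$ the expansion $[1;2,2,2,\dots]$ never terminates, which is exactly the point of the remark above.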
The bad news is that because intuitionistic or constructive logic is strictly weaker than classical logic, it does not prove anything that classical logic cannot prove. Since classical logic proves that every number is rational or irrational, it does not prove that there is a non-rational non-irrational number (assuming consistency), so intuitionistic logic also cannot prove the existence of a non-rational non-irrational number. It just can't prove that this is impossible (it
might be true, for some sense of "might"). On the other hand, there should be a model of the reals with constructive logic + $\lnot$LEM, such that there is a non-rational non-irrational number, and I invite any constructive analysts to supply such examples in the comments.
Every real number is either rational or irrational. The picture is not a good illustration, I think, though notice that a number cannot be both rational and irrational (in the picture the intersection is empty).
We can represent real numbers on a line, i.e. the real line, which contains the rationals and the irrationals. By the completeness property of the real numbers, the real line has no gaps. So there is no real number that is neither rational nor irrational.
The set of irrational numbers is the complement of the set of rational numbers, in the set of real numbers. By definition, all real numbers must be either rational or irrational.
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
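The conversion factors quoted in these two paragraphs are simple enough to encode directly; a small Python sketch (the constant and function names are my own):

```python
# Relations quoted above: 1 tog = 0.1 RSI (m^2 K/W); 1 clo = 0.155 RSI;
# 1 stone = 14 lb = 6.35029318 kg.
RSI_PER_TOG = 0.1
RSI_PER_CLO = 0.155
KG_PER_STONE = 6.35029318

def tog_to_clo(tog):
    return tog * RSI_PER_TOG / RSI_PER_CLO

def stone_to_kg(stone):
    return stone * KG_PER_STONE

print(round(tog_to_clo(1.55), 3), round(stone_to_kg(11), 2))  # 1.0 69.85
```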
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca...
I am a bit confused about angular momentum in classical physics. For the orbital motion of a point mass: if we pick a new coordinate origin (that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; there is an additional term that varies with time.)
in new coordinnate: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the 1st term varies with time ($\vec{R}$ is the constant shift of the coordinate origin, and $\vec{p}$ keeps rotating).
would anyone kind enough to shed some light on this for me?
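A quick numerical illustration of that extra term (my own toy setup, not from the thread): for a unit-mass particle in uniform circular motion, $\vec{r}(t) = (\cos t, \sin t)$, $\vec{p}(t) = (-\sin t, \cos t)$, the angular momentum about the force centre is constant, while about a shifted origin the $\vec{R}\times\vec{p}$ piece makes it oscillate, exactly because $\vec{p}$ rotates.

```python
import math

def Lz(rx, ry, px, py):
    # z-component of r x p in the plane
    return rx * py - ry * px

L_centre, L_shifted = [], []
for t in [0.0, 1.0, 2.5]:
    rx, ry = math.cos(t), math.sin(t)
    px, py = -math.sin(t), math.cos(t)
    L_centre.append(Lz(rx, ry, px, py))          # about the force centre: always 1
    L_shifted.append(Lz(rx + 1.0, ry, px, py))   # origin shifted by R = (1, 0): 1 + cos t

print([round(v, 3) for v in L_centre], [round(v, 3) for v in L_shifted])
# [1.0, 1.0, 1.0] [2.0, 1.54, 0.199]
```

About the shifted origin there is a nonzero torque (the central force does not point at the new origin), so the non-conservation is physical, not a calculation error.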
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christy but haven't got round to it yet
?Is it possible to make a time machine ever? Please give an easy answer,a simple one A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
Questions on more advanced topics of number theory, such as quadratic residues, primitive roots, prime numbers, non-linear Diophantine equations, etc. Consider first if (elementary-number-theory) might be a more appropriate tag before adding this tag.
Number theory is concerned with the study of natural numbers. One of the main subjects is studying the behavior of prime numbers.
We know that by the prime number theorem, the number of primes less than $x$ is approximately $\frac{x}{\ln(x)}$. Another good approximation is $\operatorname{li}(x)$. Despite these estimates, we don't know much about the maximal prime gaps. The weaker conjectures, such as Legendre's conjecture, Andrica's conjecture and Opperman's conjecture, imply a gap of $O\left(\sqrt{p}\right)$. Stronger conjectures even imply a gap of $O(\ln^2(p))$. The Riemann Hypothesis implies a gap of $O\left(\sqrt{p} \ln(p)\right)$, though proving this is not sufficient to show the RH. The minimal gap is also a subject of research. It has been shown that gaps smaller than or equal to $246$ occur infinitely often. It is conjectured that gaps equal to $2$ occur infinitely often. This is known as the twin prime conjecture.
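These statements are easy to poke at numerically; a short Python sketch of mine counting primes below $10^4$ against $x/\ln(x)$ and finding the largest prime gap in that range:

```python
import math

def primes_below(n):
    # simple sieve of Eratosthenes
    sieve = [True] * n
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

ps = primes_below(10_000)
max_gap = max(b - a for a, b in zip(ps, ps[1:]))
print(len(ps), round(10_000 / math.log(10_000)), max_gap)  # 1229 1086 36
```

The maximal gap of 36 below $10^4$ is comfortably inside even the conjectured $O(\ln^2 p)$ bound at this scale.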
Another subject in number theory are Diophantine equations, which are polynomial equations in more than one variable, where variables are integer valued. Some equations can be solved by considering terms modulo some number or by considering divisors, prime factors or the number of divisors. Other equations, such as Fermat's Last Theorem, are much harder, and are or were famous open problems. Recent progress usually uses algebraic number theory and the related elliptic curves.
Another subject in number theory is the study of number theoretic functions, most notably $\tau(n)$, the number of divisors of $n$; $\sigma(n)$, the sum of divisors of $n$; and $\varphi(n)$, the Euler phi function, the number of numbers smaller than $n$ coprime with $n$.
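Naive one-line implementations make the definitions concrete (a sketch; fine for small $n$, far too slow for serious work):

```python
from math import gcd

def tau(n):    # number of divisors of n
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def sigma(n):  # sum of divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

def phi(n):    # Euler phi: count of 1 <= k <= n with gcd(k, n) = 1
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(tau(12), sigma(12), phi(12))  # 6 28 4
```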
For questions on congruences, linear Diophantine equations, greatest common divisors, etc. , please use the elementary-number-theory tag. This tag is for more advanced topics, such as questions about the distributions of prime numbers, non-linear Diophantine equations, quadratic residues, primitive roots and questions about number theoretic functions. |
I used Plot[NIntegrate[...]...] to plot a function of 5 different variables. It took really long. Right now I need to integrate this function one more time over the 5th variable and plot the result. Can someone tell me if there is an option to NIntegrate the plot to speed things up? Maybe there's a way to convert the plot to a list of $x$ and $y$ values and then take a sum over $(y_2-y_1)\,(x_2-x_1)$ and list plot it, or something similar?
I guess it's not gonna work for me. I have the following function:
f[ρ_, ϕ_, ζ_, ξ_, r_] := 2.*100.*20.*0.02*0.2*ρ*Cos[ϕ]*Sech[20*(ρ - 1.)]^2.*ζ*Exp[-ζ]* BesselJ[0, 100.*ζ*Sqrt[r^2. + ρ^2. - 2.*r*ρ*Cos[ϕ]]]*(Sqrt[ζ^2. + 0.02^2.]* Cosh[10.*Sqrt[ζ^2. + 0.02^2.]*(ξ + 1.)] + ζ*Sinh[10.*Sqrt[ζ^2. + 0.02^2.] *(ξ + 1.)])/((2.*ζ^2. + 0.02^2.)*Sinh[10.*Sqrt[ζ^2. + 0.02^2.]] + 2.*ζ*Sqrt[ζ^2. + 0.02^2.]*Cosh[10.*Sqrt[ζ^2. + 0.02^2.]])
Here the function $f$ has 5 variables: $\rho,\ \phi,\ \zeta,\ \xi,\ r$. First, I need to integrate over $\rho,\ \phi,\ \zeta,\ \xi$ and plot $f=f(r)$. It takes about 1 hour. I use, as I said before,
Plot[NIntegrate[f, {ξ, -1., 0.}, {ζ, 0., ∞}, {ϕ, -3.1415, +3.1415}, {ρ, 0., ∞}], {r, 0, 10}]
The next step is that I need to integrate one more time over $r$, as
{r, 0, r1}, and plot the result over
{r1, 0, 7}. I tried what you've said, but I failed. Is your method with NDSolve going to work with this kind of function?
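I cannot test the Mathematica specifics here, but the "convert the plot to a list and sum" idea from the question can be sketched language-agnostically: sample the expensive function once on an $r$-grid, then a single cumulative trapezoid pass gives $\int_0^{r_1} f\,dr$ for every $r_1$ at no extra cost. A Python sketch with a placeholder integrand (the real one would be the 4-fold NIntegrate result):

```python
def f(r):
    # stand-in for the expensive integrand; exact integral of r**2 is r**3 / 3
    return r * r

rs = [i * 0.01 for i in range(701)]   # grid on [0, 7]
ys = [f(r) for r in rs]               # evaluate the costly function only once

# cumulative trapezoid: F[k] approximates the integral of f from 0 to rs[k]
F = [0.0]
for i in range(1, len(rs)):
    F.append(F[-1] + 0.5 * (ys[i] + ys[i - 1]) * (rs[i] - rs[i - 1]))

print(round(F[-1], 2))  # close to 7**3 / 3 = 114.33...
```

In Mathematica the same idea would roughly be a Table of sampled values followed by Interpolation and ListPlot, so the inner four-dimensional NIntegrate runs only once per grid point instead of once per plot point per outer integral.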
Does anyone here understand why he set the velocity of the center of mass to 0 here? He keeps setting the velocity of the center of mass, and the acceleration of the center of mass (in other questions), to zero, which I don't comprehend.
@amanuel2 Yes, this is a conservation of momentum question. The initial momentum is zero, and since there are no external forces, after she throws the 1st wrench the sum of her momentum plus the momentum of the thrown wrench is zero, and the centre of mass is still at the origin.
I was just reading a sci-fi novel where physics "breaks down". While of course fiction is fiction and I don't expect this to happen in real life, when I tried to contemplate the concept I find that I cannot even imagine what it would mean for physics to break down. Is my imagination too limited o...
The phase-space formulation of quantum mechanics places the position and momentum variables on equal footing, in phase space. In contrast, the Schrödinger picture uses the position or momentum representations (see also position and momentum space). The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and operator multiplication is replaced by a star product.The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis, and independently by Joe...
not exactly identical however
Also typo: Wavefunction does not really have an energy, it is the quantum state that has a spectrum of energy eigenvalues
Since Hamilton's equation of motion in classical physics is $$\frac{d}{dt} \begin{pmatrix} x \\ p \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \nabla H(x,p) \, ,$$ why does everyone make a big deal about Schrodinger's equation, which is $$\frac{d}{dt} \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \hat H \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} \, ?$$
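The parallel can be checked numerically for the simplest case: a single energy eigenstate, where $i\,d\psi/dt = E\psi$ splits into exactly that skew-symmetric rotation of $(\mathrm{Re}\,\Psi, \mathrm{Im}\,\Psi)$. A Python sketch of mine integrating with small explicit Euler steps and comparing to the exact phase:

```python
import math

E = 2.0                    # energy eigenvalue: d(Re)/dt = E*Im, d(Im)/dt = -E*Re
dt, steps = 1e-4, 10_000   # evolve to t = 1
re, im = 1.0, 0.0          # psi(0) = 1

for _ in range(steps):
    re, im = re + dt * E * im, im - dt * E * re

# exact solution: psi(t) = exp(-i E t), so (Re, Im) = (cos(E t), -sin(E t))
print(round(re, 3), round(im, 3))  # -0.416 -0.909
```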
Oh by the way, the Hamiltonian is a stupid quantity. We should always work with $H / \hbar$, which has dimensions of frequency.
@DanielSank I think you should post that question. I don't recall many looked at the two Hamilton equations together in this matrix form before, which really highlight the similarities between them (even though technically speaking the schroedinger equation is based on quantising Hamiltonian mechanics)
and yes you are correct about the $\nabla^2$ thing. I got too used to the position basis
@DanielSank The big deal is not the equation itself, but the meaning of the variables. The form of the equation itself just says "the Hamiltonian is the generator of time translation", but surely you'll agree that classical position and momentum evolving in time are a rather different notion than the wavefunction of QM evolving in time.
If you want to make the similarity really obvious, just write the evolution equations for the observables. The classical equation is literally Heisenberg's evolution equation with the Poisson bracket instead of the commutator, no pesky additional $\nabla$ or what not
The big deal many introductory quantum texts make about the Schrödinger equation is due to the fact that their target audience are usually people who are not expected to be trained in classical Hamiltonian mechanics.
No time remotely soon, as far as things seem. Just the amount of material required for an undertaking like that would be exceptional. It doesn't even seem like we're remotely near the advancement required to take advantage of such a project, let alone organize one.
I'd be honestly skeptical of humans ever reaching that point. It's cool to think about, but so much would have to change that trying to estimate it would be pointless currently
(lol) talk about raping the planet(s)... re dyson sphere, solar energy is a simplified version right? which is advancing. what about orbiting solar energy harvesting? maybe not as far away. kurzgesagt also has a video on a space elevator, its very hard but expect that to be built decades earlier, and if it doesnt show up, maybe no hope for a dyson sphere... o_O
BTW @DanielSank Do you know where I can go to wash off my karma? I just wrote a rather negative (though well-deserved, and as thorough and impartial as I could make it) referee report. And I'd rather it not come back to bite me on my next go-round as an author o.o |
Chaudhuri, Nirmalendu and Ramaswamy, Mythily (2001)
Existence of positive solutions of some semilinear elliptic equations with singular coefficients. In: Proceedings of the Royal Society of Edinburgh: Mathematics, 131 (6). pp. 1275-1295.
Abstract
In this paper, we consider the semilinear elliptic problem in a bounded domain $\Omega \subseteq \mathbb{R}^n$, $-\Delta u = \frac{\mu}{|x|^{\alpha}}u^{2^{\ast}_{\alpha}-1} + f(x)g(u)$ in $\Omega$, $u>0$ in $\Omega$, $u=0$ on $\partial\Omega$, where $\mu \geq 0$, $0 \leq \alpha \leq 2$, $2^{\ast}_{\alpha} := 2(n-\alpha)/(n-2)$, $f : \Omega \rightarrow \mathbb{R}^+$ is measurable, $f > 0$ a.e., having a lower-order singularity than $|x|^{-2}$ at the origin, and $g : \mathbb{R} \rightarrow \mathbb{R}$ is either linear or superlinear. For $1 < p < n$, we characterize a class of singular functions $\Im_p$ for which the embedding $W_0^{1,p}(\Omega) \hookrightarrow L^p(\Omega, f)$ is compact. When $p = 2$, $\alpha = 2$, $f \in \Im_2$ and $0 \leq \mu < (\frac{1}{2}(n-2))^2$, we prove that the linear problem has $H_0^1$-discrete spectrum. By improving the Hardy inequality we show that for $f$ belonging to a certain subclass of $\Im_2$, the first eigenvalue goes to a positive number as $\mu$ approaches $(\frac{1}{2}(n-2))^2$. Furthermore, when $g$ is superlinear, we show that for the same subclass of $\Im_2$, the functional corresponding to the differential equation satisfies the Palais-Smale condition if $\alpha = 2$, and a Brezis-Nirenberg type of phenomenon occurs for the case $0 \leq \alpha < 2$.
Item Type: Journal Article
Additional Information: Copyright of this article belongs to the Royal Society of Edinburgh.
Department/Centre: Division of Physical & Mathematical Sciences > Mathematics
Depositing User: Mr. Ramesh Chander
Date Deposited: 01 Aug 2008
Last Modified: 19 Sep 2010 04:47
URI: http://eprints.iisc.ac.in/id/eprint/15116
Automata Theory- Brzozowski Algebraic Method
A LaTeX typeset copy of this tutorial is below for download:
Brzozowski_Algebraic_Method.pdf
(126.4K)
Number of downloads: 1369
I. Introduction
Recall that a language is regular if and only if it is accepted by some finite state automaton. In other words, each regular expression has a corresponding finite state automaton. An important problem in Automata Theory deals with converting between regular expressions and finite state automata. The Brzozowski Algebraic Method is an intuitive algorithm which takes a finite state automaton and returns a regular expression. This algorithm relies on notions from graph theory and linear algebra; particularly, graph walks and the substitution method for solving systems of equations.
II. Notions From Graph Theory
In this section, the notions of a graph walk and the adjacency matrix will be introduced, along with some relevant results. I start with the definition of a walk.
Walk: Let G be a graph, and let v = (v_{1}, v_{2}, ..., v_{n}) be a sequence of vertices (not necessarily distinct). Then v is a walk if v_{i}, v_{i+1} are adjacent for all i \in {1, ..., n-1}.
Intuitively, we start at some vertex v_{1}. Then we visit v_{2}, which is a neighbor of v_{1}. Then v_{3} is a neighbor of v_{2}. Consider the cycle graph on four vertices, C_{4}, which is shown below. Some example walks include (v_{a}, v_{b}, v_{d}) and (v_{a}, v_{b}, v_{a}, v_{c}, v_{a}). However, (v_{a}, v_{d}, v_{a}, v_{b}) is not a walk, as v_{a} and v_{d} are not adjacent.
Consider an example of a walk on a directed graph. Observe that (S, A, A, q_{acc}) is a walk, but (S, A, S) is not a walk as there is no directed edge (A, S) in the graph.
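The directed walk condition is mechanical to check in code. The following Python sketch encodes the transition edges read off the example above (S -> A, A -> A, A -> q_acc; the dictionary and state labels are my own rendering of the figure):

```python
# Directed adjacency as a dict of successor sets.
adj = {"S": {"A"}, "A": {"A", "q_acc"}, "q_acc": set()}

def is_walk(seq):
    # every consecutive pair of vertices must be a directed edge
    return all(v in adj[u] for u, v in zip(seq, seq[1:]))

print(is_walk(["S", "A", "A", "q_acc"]), is_walk(["S", "A", "S"]))  # True False
```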
The adjacency matrix will now be defined, and we will explore how it relates to the notion of a walk.
Adjacency Matrix: Let G be a graph. The adjacency matrix A \in \{0, 1\}^{V \times V}, with A_{ij} = 1 if vertex i is adjacent to vertex j, and A_{ij} = 0 otherwise. Note: If G is undirected, then A_{ij} = A_{ji}. Finite state automata diagrams are directed graphs, though, so A_{ij} may differ from A_{ji}.
Consider the simple C_{4} graph above. The adjacency matrix for this graph is:
Similarly, the adjacency matrix of the above directed graph is:
The adjacency matrix is connected to the notion of a walk by the fact that (A^{n})_{ij} counts the number of walks of length n starting at vertex i and ending at vertex j. The proof of this theorem provides the setup for the Brzozowski Algebraic Method. For this reason, I will provide a formal proof of this theorem.
Theorem: Let G be a graph and let A be its adjacency matrix. Let n \in \mathbb{N}. Then each cell of A^{n}, denoted (A^{n})_{ij}, counts the number of walks of length n starting at vertex i and ending at vertex j. Proof: This theorem will be proven by induction on n \in \mathbb{N}. Consider the base case of n = 1. By definition of the adjacency matrix, if vertices i and j are adjacent, then A_{ij} = 1. Otherwise, A_{ij} = 0. Thus, A counts the number of walks of length 1 in the graph. Thus, the theorem holds at n = 1.
Suppose the theorem holds for an arbitrary integer k \geq 1. The theorem will be proven true for the k+1 case. As matrix multiplication is associative, A^{k+1} = A^{k} \cdot A. By the inductive hypothesis, A^{k} counts the number of walks of length k in the graph, and A counts the number of walks of length 1 in the graph. Consider:
(A^{k+1})_{ij} = \sum_{x=1}^{n} ((A^{k})_{ix} \cdot A_{xj})
For each x \in \{1, ..., n\}, the term (A^{k})_{ix} \cdot A_{xj} counts the walks of length k from vertex i to vertex x that can be extended by the edge (x, j) into a walk of length k+1 ending at vertex j. Summing over all intermediate vertices x therefore counts exactly the walks of length k+1 from vertex i to vertex j. Thus, the theorem holds by the principle of mathematical induction.
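The theorem is easy to check numerically. The sketch below assumes the labeling of C_{4} implied by the walks listed earlier (vertex order v_{a}, v_{b}, v_{c}, v_{d}, with edges a-b, a-c, b-d, c-d):

```python
# Verify that (A^2)_{ij} counts length-2 walks on C_4, with vertex order
# (v_a, v_b, v_c, v_d) and edges a-b, a-c, b-d, c-d (an assumed labeling
# consistent with the example walks above).
A = [[0, 1, 1, 0],
     [1, 0, 0, 1],
     [1, 0, 0, 1],
     [0, 1, 1, 0]]

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = mat_mul(A, A)
print(A2[0][3])  # two length-2 walks from v_a to v_d -> 2
```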
Example: Consider again the graph C_{4}, and (A(C_{4}))^{2}, given below. This counts the number of walks of length 2 on C_{4}. Observe that ((A(C_{4}))^{2})_{14} states that there are two walks of length 2 from vertex v_{a} to vertex v_{d}. These walks are (v_{a}, v_{b}, v_{d}) and (v_{a}, v_{c}, v_{d}).
III. Brzozowski Algebraic Method
The Brzozowski Algebraic Method takes a finite state automata diagram (the directed graph) and constructs a system of linear equations to solve. Solving a subset of these equations will yield the regular expression for the finite state automata. I begin by defining some notation. Let E_{i} denote the regular expression which takes the finite state automata from state q_{0} to state q_{i}.
The system of equations consists of recursive definitions for each E_{i}, where the recursive definition consists of sums of E_{j}R_{ji} products, where R_{ji} is a regular expression consisting of the union of single characters. That is, R_{ji} represents the selection of single transitions from state j to state i, or single edges (j, i) in the graph. So if \delta(q_{j}, a) = \delta(q_{j}, b) = q_{i}, then R_{ji} = (a + b). In other words, E_{j} takes the finite state automata from state q_{0} to q_{j}. Then R_{ji} is a regular expression describing strings that will take the finite state automata from state j to state i in exactly one step. That is:
E_{i} = \sum_{j \in Q, \text{there is an edge from state } j \text{ to state } i} E_{j}R_{ji}
Note: Recall that addition when dealing with regular expressions is the set union operation.
Once we have the system of equations, then we solve them by backwards substitution just as in linear algebra and high school algebra.
The explanation of this algorithm is dense, though. Let's work through an example to better understand it. We seek a regular expression over the alphabet \Sigma = \{0, 1, 2, 3, 4, 5, 6, 7, 8, 9\} describing those integers congruent to 0 modulo 3.
In order to construct the finite state automata for this language, we take advantage of the fact that a number n \equiv 0 \pmod{3} if and only if the sum of n's digits is also divisible by 3. For example, we know 3|123 because 1 + 2 + 3 = 6, a multiple of 3. However, 125 is not divisible by 3 because 1 + 2 + 5 = 8 is not a multiple of 3.
Now for simplicity, let's partition \Sigma into its equivalence classes a = \{0, 3, 6, 9\} (digits congruent to 0 mod 3), b = \{1, 4, 7\} (digits congruent to 1 mod 3), and c = \{2, 5, 8\} (digits congruent to 2 mod 3). Similarly, we let state q_{i} represent a digit sum congruent to i mod 3, so q_{0} corresponds to a, q_{1} to b, and q_{2} to c. Thus, the finite state automata diagram is given below, with q_{0} as the accepting halt state:
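Before deriving the equations, the automaton itself can be simulated in a few lines. This is an illustrative sketch, not part of the original notes:

```python
# Simulate the three-state automaton, where state q_i means
# "the digit sum read so far is congruent to i mod 3".
def accepts(digits: str) -> bool:
    state = 0  # start in q_0
    for ch in digits:
        state = (state + int(ch)) % 3  # reading digit d moves q_i to q_{(i+d) mod 3}
    return state == 0  # q_0 is the accepting halt state

print(accepts("123"), accepts("125"))  # True False
```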
We consider the system of equations given by E_{i}, taking the FSM from state q_{0} to q_{i}:
E_{0} = \lambda + E_{0}a + E_{1}c + E_{2}b. We end at q_{0} if we read in the empty string; or if we reach q_{0} and read in a character from a; or if we reach q_{1} and read in a character from c; or if we reach q_{2} and read in a character from b.
E_{1} = E_{0}b + E_{1}a + E_{2}c. To transition from q_{0} \to q_{1}, we can go from q_{0} \to q_{0} and read in a character from b; go from q_{0} \to q_{1} and read in a character from a; or go from q_{0} \to q_{2} and read in a character from c.
E_{2} = E_{0}c + E_{1}b + E_{2}a. To transition from q_{0} \to q_{2}, we can go from q_{0} \to q_{0} and read a character from c; go from q_{0} \to q_{1} and read in a character from b; or go from q_{0} \to q_{2} and read in a character from a.
Since q_{0} is the accepting halt state, only a closed form expression of E_{0} is needed.
There are two steps which are employed. The first is to simplify a single equation, then to backwards substitute into a different equation. We repeat this process until we have the desired closed-form solution for the relevant E_{i} (in this case, just E_{0}). In order to simplify a variable, we apply Arden's Lemma, which states that if E = \alpha + E\beta, then E = \alpha(\beta)^{*}, where \alpha, \beta are regular expressions.
We start by simplifying E_{2} using Arden's Lemma: E_{2} = (E_{0}c + E_{1}b)a^{*}.
We then substitute E_{2} into E_{1}, giving us E_{1} = E_{0}b + E_{1}a + (E_{0}c + E_{1}b)a^{*}c = E_{0}(b + ca^{*}c) + E_{1}(a + ba^{*}c). By Arden's Lemma, we get E_{1} = E_{0}(b + ca^{*}c)(a + ba^{*}c)^{*}
Substituting again, E_{0} = \lambda + E_{0}a + E_{0}(b + ca^{*}c)(a + ba^{*}c)^{*}c + (E_{0}c + E_{1}b)a^{*}b.
Expanding out, we get E_{0} = \lambda + E_{0}a + E_{0}(b + ca^{*}c)(a + ba^{*}c)^{*}c + E_{0}ca^{*}b + E_{0}(b + ca^{*}c)(a + ba^{*}c)^{*}ba^{*}b.
Then factoring out: E_{0} = \lambda + E_{0}(a + ca^{*}b + (b + ca^{*}c)(a + ba^{*}c)^{*}(c + ba^{*}b) ).
By Arden's Lemma, we have: E_{0} = (a + ca^{*}b + (b + ca^{*}c)(a + ba^{*}c)^{*}(c + ba^{*}b) )^{*}, a closed form regular expression for the integers congruent to 0 mod 3 over \Sigma.
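As a sanity check (my own addition, not in the original notes), the closed form can be translated into an ordinary regular expression by replacing the classes a, b, c with character classes and + with alternation, then tested against actual divisibility:

```python
import re

# Translate a = {0,3,6,9}, b = {1,4,7}, c = {2,5,8} into character classes
# and build E_0 = (a + c a* b + (b + c a* c)(a + b a* c)*(c + b a* b))*.
a, b, c = "[0369]", "[147]", "[258]"
E0 = (f"(?:{a}|{c}{a}*{b}|"
      f"(?:{b}|{c}{a}*{c})(?:{a}|{b}{a}*{c})*(?:{c}|{b}{a}*{b}))*")
pattern = re.compile(E0)

# The derived expression agrees with divisibility by 3 on every test case.
for n in range(1, 1000):
    assert (n % 3 == 0) == (pattern.fullmatch(str(n)) is not None)
print("ok")
```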
Note: The hard part of this algorithm is the careful bookkeeping. In a substitution step, the regular expression for an E_{i} may grow and require simplification. Be careful to keep track of how the regular expression is expanded on a substitution step, and how it is possible to factor out terms. |
Filter
Revision as of 11:43, 11 November 2017
A filter on a set $S$ is a special subset of $\mathcal{P}(S)$ that contains $S$ itself, does not contain the empty set, and is closed under finite intersections and the superset relation. An ideal on $S$ is the dual of a filter: if $F$ is a filter, the set of the complements (in $S$) of $F$'s elements forms an ideal, and vice-versa; equivalently, an ideal is a special subset of $\mathcal{P}(S)$ that contains the empty set but not $S$ itself, and is closed under finite unions and the subset relation.
An ultrafilter is a maximal filter, i.e. it is not a subset of any other filter; equivalently, every subset of $S$ is either in it or its complement (in $S$) is. Filters, and especially ultrafilters, are closely connected to several large cardinal notions, such as measurable cardinals and strongly compact cardinals. The dual notion is a prime ideal. Thus an ultrafilter and its dual prime ideal partition $\mathcal{P}(S)$ into two parts.
Intuitively, the members of a filter are the subsets of $S$ "large" enough to satisfy some property. $S$ is always "large enough", while $\emptyset$ never is. $F$ being closed under finite intersections means that the intersection of two large sets is still large enough: two sets in $F$ only differ by a "too small" set. Also, $F$ being closed under the superset relation means that if a set $X$ contains a large enough set then $X$ is also large enough. For example, for any nonempty $X\subset S$, the set of all supersets of $X$ (i.e. the set of all sets "larger" than $X$) is always a filter. Similarly, the members of an ideal represent the subsets of $S$ that are "too small": $\emptyset$ is always too small, $S$ never is, the union of two too small sets is still too small, and if a set is contained (as a subset) in a too small set, then it is itself too small.
Definitions
A set $F\subseteq\mathcal{P}(S)$ is a filter on $\mathcal{P}(S)$ (or just "on $S$") if it satisfies the following properties:
* $\emptyset\not\in F$ (proper filter), $S\in F$
* $X\cap Y\in F$ whenever $X,Y\in F$ (finite intersection property)
* $Y\in F$ whenever $X\subset Y\subset S$ and $X\in F$ (upward closed / closed under supersets)
A set $I\subseteq\mathcal{P}(S)$ is an ideal on $\mathcal{P}(S)$ (or just "on $S$") if it satisfies the following properties:
* $S\not\in I$, $\emptyset\in I$
* $X\cup Y\in I$ whenever $X,Y\in I$ (finite union property)
* $Y\in I$ whenever $Y\subset X\subset S$ and $X\in I$ (downward closed / closed under subsets)
Given a filter $F$, its dual ideal is $I=\{S\setminus X : X\in F\}$. Conversely, every ideal has a dual filter. If two filters/ideals are not equal, their duals are not equal either.
A filter $F$ is trivial if $F=\{S\}$. It is principal if there exists $X\subset S$ such that $Y\in F$ if and only if $X\subset Y$. Every nonempty subset $X\subset S$ has an associated principal filter. Similarly, the trivial ideal is $I=\{\emptyset\}$, and an ideal is principal if there exists $X\subset S$ such that $Y\in I$ if and only if $Y\subset X$.
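As a concrete (and entirely illustrative) sketch, the principal filter generated by a subset of a small finite set can be built and the axioms checked by brute force; the set and generator below are my own choices:

```python
from itertools import combinations

# Build the principal filter on S = {1, 2, 3} generated by X = {1} and
# check the filter axioms by brute force (illustrative sketch only).
S = frozenset({1, 2, 3})
X = frozenset({1})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

F = {Y for Y in powerset(S) if X <= Y}  # all supersets of X

assert frozenset() not in F and S in F                        # properness
assert all(A & B in F for A in F for B in F)                  # finite intersections
assert all(B in F for A in F for B in powerset(S) if A <= B)  # supersets
print(len(F))  # the four supersets of {1} -> 4
```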
A filter (resp. an ideal) $F$ is an ultrafilter (resp. a prime ideal) if for all $X\subset S$, either $X\in F$ or $S\setminus X\in F$. Equivalently, there is no filter (resp. ideal) $F'$ such that $F\subset F'$ but $F\neq F'$ (i.e. $F$ is maximal).
$F$ is $\theta$-complete for a cardinal $\theta$ if for every family $\{X_\alpha : \alpha<\lambda\}$ with $\lambda<\theta$ and $X_\alpha\in F$ for all $\alpha<\lambda$, one has $\bigcap_{\alpha<\lambda}X_\alpha\in F$. The completeness of $F$ is the smallest cardinal $\theta$ such that there is a subset $X\subset F$ with $|X|=\theta$ and $\bigcap X\not\in F$; equivalently, it is the largest $\theta$ such that $F$ is $\theta$-complete. Similarly for ideals, replacing intersections by unions.
A filter $F$ on $\kappa$ is normal if it is closed under diagonal intersections: $\Delta_{\alpha\in \kappa}X_\alpha = \{\xi\in \kappa : \xi\in\bigcap_{\alpha\in\xi}X_\alpha\}$. That is, for every family $\{X_\alpha : \alpha<\kappa\}$ with $X_\alpha\in F$ for all $\alpha<\kappa$, one has $\Delta_{\alpha<\kappa}X_\alpha\in F$. Similarly for ideals, replacing intersections by unions.
Whenever a filter is either nontrivial, nonprincipal, $\theta$-complete, normal or maximal, so is its dual ideal.
Properties
The finite intersection property is equivalent to $\aleph_0$-completeness. Every set $G\subset \mathcal{P}(S)$ with the finite intersection property can be extended to a filter, i.e. there exists a filter $F$ such that $G\subset F$. A filter or an ideal being countably complete (or $\sigma$-complete) means that it is $\aleph_1$-complete. The completeness of a countably complete nonprincipal ultrafilter or prime ideal on $S$ is always a measurable cardinal. However, every countably complete filter on a countable or finite set is principal.
Every cardinal $\kappa\geq\aleph_0$ has $2^{2^\kappa}$ ultrafilters and prime ideals. Under the axiom of choice, every filter can be extended to an ultrafilter, and every ideal can be extended to a prime ideal.
If $G$ is a nonempty set of filters on $S$, then $\bigcap G$ is a filter on $S$. If $G$ is a $\subset$-chain of filters, then $\bigcup G$ is a filter.
Let $j:\mathcal{M}\to \mathcal{N}$ be a (nontrivial) elementary embedding with critical point $\kappa$. Then the set $\mathcal{U}_j=\{x\subset\kappa : \kappa\in j(x)\}$ is a $\kappa$-complete nonprincipal ultrafilter on $(\mathcal{P}(\kappa))^\mathcal{M}$; in particular if $\mathcal{M}=V$ then $(\mathcal{P}(\kappa))^\mathcal{M}=\mathcal{P}(\kappa)$ and thus $\kappa$ is measurable.
The club filter, the non-stationary ideal and saturation
Given a regular uncountable cardinal $\kappa$, the collection of all clubs in $\kappa$ has the finite intersection property, thus it can be extended to a filter. This filter contains precisely the subsets of $\kappa$ with a subset that is a club in $\kappa$. We call this filter the club filter of $\kappa$. This filter is $\kappa$-complete and normal (i.e. closed under diagonal intersections).
Let $I_{NS}$, the nonstationary ideal on $\kappa$, be the dual ideal of the club filter of $\kappa$. This is a normal $\kappa$-complete ideal. Both $I_{NS}$ and the club filter are minimal: if $F$ is a normal filter containing all initial segments $\{\alpha : \alpha_0<\alpha<\kappa \}$ then it contains the club filter of $\kappa$. This means $I_{NS}$ and the club filter are not maximal; in particular, the club filter is not a normal measure (see below) despite being normal and $\kappa$-complete.
Let $I$ be a $\kappa$-complete ideal on $\kappa$ containing all singletons of elements of $\kappa$. $I$ contains all subsets of $\kappa$ of cardinality less than $\kappa$. We say that $I$ is $\lambda$-saturated if there is no collection $W$ of subsets of $\kappa$ such that $|W|=\lambda$, $I$ and $W$ are disjoint, but the intersection of any two elements of $W$ is in $I$. $\aleph_1$-saturation is called $\sigma$-saturation. $sat(I)$ is the smallest $\lambda$ such that $I$ is $\lambda$-saturated.
An ideal $I$ is prime if and only if $sat(I)=2$. Trivially, every ideal is $(2^\kappa)^+$-saturated. Any $\kappa$ carrying a $\sigma$-saturated $\kappa$-complete ideal must be either measurable, or $\leq 2^{\aleph_0}$ and real-valued measurable. If there exists a $\kappa$-saturated $\kappa$-complete ideal on $\kappa$, then there is such an ideal that is additionally normal; the same holds for $\kappa^+$-saturation.
If there exists an $\aleph_2$-saturated ideal on $\omega_1$ then:
* $2^{\aleph_0}=\aleph_1$ implies $2^{\aleph_1}=\aleph_2$.
* $2^{\aleph_0}=\aleph_{\omega_1}$ implies $2^{\aleph_1}\leq\aleph_{\omega_2}$.
* $\aleph_1 < 2^{\aleph_0} < \aleph_{\omega_1}$ implies $2^{\aleph_0} = 2^{\aleph_1}$.
* $2^{<\aleph_{\omega_1}}=\aleph_{\omega_1}$ implies $2^{\aleph_{\omega_1}}<\aleph_{\omega_2}$.
This hypothesis, which follows from Martin's Maximum, is consistent relative to a Woodin cardinal, in fact that ideal can be the nonstationary ideal on $\omega_1$. This cannot happen for cardinals larger than $\omega_1$ however: for every cardinal $\kappa\geq\aleph_2$, the nonstationary ideal on $\kappa$ is not $\kappa^+$-saturated.
Ultrapowers
Main article: Ultrapower
Precipitous ideals
To be expanded.
Measures
Filters are related to the concept of measures.
Let $|S|\geq\aleph_0$. A (nontrivial $\sigma$-additive) measure on $S$ is a function $\mu:\mathcal{P}(S)\to[0,+\infty]$ such that:
* $\mu(\emptyset)=0$, $\mu(S)>0$
* $\mu(X)\leq\mu(Y)$ whenever $X\subset Y$
* if $\{X_n : n<\omega\}$ is such that $X_i\cap X_j=\emptyset$ whenever $i<j$, then $\mu(\bigcup_{n<\omega}X_n)=\sum_{n=0}^{\infty}\mu(X_n)$
$\mu$ is a probability measure if $\mu(S)=1$. $\mu$ is nontrivial because there exists a set $A$ of positive measure, i.e. $\mu(A)>0$, since we required $\mu(S)>0$.
$\mu$ is $\theta$-additive if whenever $\{X_\alpha : \alpha<\lambda\}$, with $\lambda<\theta$, is such that $X_i\cap X_j=\emptyset$ whenever $i<j$, then $\mu(\bigcup_{\alpha<\lambda}X_\alpha)=\sum_{\alpha<\lambda}\mu(X_\alpha)$. Every measure $\mu$ is $\aleph_1$-additive (i.e. countably additive / $\sigma$-additive).
$\mu$ is 2-valued (or 0-1-valued) if for all $X\subset S$, either $\mu(X)=0$ or $\mu(X)=1$. A set $A\subset S$ such that $\mu(A)>0$ is an atom for $\mu$ if $\mu(X)=0$ or $\mu(X)=\mu(A)$ for all $X\subset A$. $\mu$ is atomless if it has no atoms.
A set $X\subset S$ is null if $\mu(X)=0$.
Properties
Let $\mu$ be a 2-valued measure on $S$. Then $\{X\subset S : \mu(X)=1\}$ is a $\sigma$-complete ultrafilter on $S$. Conversely, if $F$ is a $\sigma$-complete ultrafilter on $S$, then the function $\mu:\mathcal{P}(S)\to[0,1]$ defined by "$\mu(X)=1$ if $X\in F$, $\mu(X)=0$ otherwise" is a 2-valued measure on $S$. If $\mu$ has an atom $A$, the set $\{X\subset S : \mu(X\cap A)=\mu(A)\}$ is a $\sigma$-complete ultrafilter on $S$.
If $\mu$ is atomless (i.e. has no atoms), then $\mu(\{x\})=0$ for every $x\in S$. In fact, $\mu(X)=0$ for every finite or countably infinite set $X\subset S$. Thus every measure on a countable set has an atom (otherwise $\mu(S)$ would be $0$, contradicting the nontriviality of $\mu$). If $\mu$ is atomless, then every set $X\subset S$ of positive measure is the disjoint union of two sets of positive measure; moreover, $\mu$ takes a continuum of different values, and if $A$ is a set of positive measure, then for every $b\in [0,\mu(A)]$ there exists $B\subset A$ such that $\mu(B)=b$.
Normal fine measures and large cardinals
Let $\mathcal{P}_\kappa(A)$, for $|A|\geq\kappa$, be the set of all subsets of $A$ of cardinality less than $\kappa$.
A filter $F$ on $\mathcal{P}_\kappa(A)$ is a fine filter if for every $a\in A$, the set $\{x\in \mathcal{P}_\kappa(A) : a\in x\}\in F$. If $F$ is also $\sigma$-complete and an ultrafilter, it is called a fine measure, because it can be identified with its dual measure $\mu$ defined by $\mu(X)=1$ if $X\in F$, $0$ otherwise.
A fine measure $F$ on $\mathcal{P}_\kappa(A)$ is normal if for every function $f:\mathcal{P}_\kappa(A)\to A$, if the set $\{x\in \mathcal{P}_\kappa(A) : f(x)\in x\}\in F$ then $f$ is constant on a set in $F$, i.e. there is $k\in A$ such that $F$ also contains the set $\{x\in \mathcal{P}_\kappa(A) : f(x)=k\}$. Note that normal fine measures are also normal in the sense that they are closed under diagonal intersections, i.e. for every family $\{X_\alpha : \alpha<\kappa\}$ with $X_\alpha\in F$ for all $\alpha<\kappa$, one has $\Delta_{\alpha<\kappa}X_\alpha\in F$.
If there exists a 2-valued $\kappa$-additive measure on $\kappa$, then $\kappa$ is a measurable cardinal. This is equivalent to saying that there is a $\kappa$-complete nonprincipal ultrafilter on $\kappa$. If $j:V\to\mathcal{M}$ is a nontrivial elementary embedding with critical point $\kappa$, then $\mathcal{U}_j=\{x\subset\kappa : \kappa\in j(x)\}$ is a $\kappa$-complete nonprincipal ultrafilter on $\mathcal{P}(\kappa)$ and $\kappa$ is measurable. In fact, $\mathcal{U}_j$ is a normal fine measure on $\kappa$, which we can call the "canonical" normal fine measure generated by $j$.
If, for every set $S$, every $\kappa$-complete filter on $S$ can be extended to a $\kappa$-complete ultrafilter on $S$, then $\kappa$ is strongly compact. The converse is also true: every strongly compact cardinal has this property. Note that nonprincipality is not required here. Every strongly compact cardinal is measurable, and it is consistent that the first measurable and the first strongly compact cardinals are equal. Strong compactness is furthermore equivalent to the assertion that for every set $A$ such that $|A|\geq\kappa$ there exists a fine measure on $\mathcal{P}_\kappa(A)$. Those measures need not be normal.
If there is a cardinal $\lambda\geq\kappa$ such that there is a normal fine measure on $\mathcal{P}_\kappa(\lambda)$, then $\kappa$ is $\lambda$-supercompact; if it is $\lambda$-supercompact for every $\lambda\geq\kappa$, then it is supercompact. This is equivalent to saying that for every set $A$ with $|A|\geq\kappa$, there is a normal fine measure on $\mathcal{P}_\kappa(A)$. Clearly, every supercompact cardinal is strongly compact by the last characterization of strong compactness. It is open whether supercompactness is stronger than strong compactness consistency-wise.
Every set in a normal measure is stationary; moreover, every measurable cardinal carries a normal measure containing the set of all inaccessible, Mahlo, and even Ramsey cardinals below it. Every supercompact cardinal $\kappa$ carries $2^{2^\kappa}$ normal measures.
I have come up with the following understanding about Expectation of RV.
Please, correct me if I am wrong.
Definition: The expected value of a RV says that, if a random experiment is repeated $n$ times, what value you would expect to see most of the time. The expected value of a random variable is a value. It is not a probability.
Example: In the case of a dice, if the dice is rolled $n$ times, the value we would see most of the time is 3.5.
Calculation: We don't have to repeat the same experiment $n$ times to find the expected value of a random variable. It can be computed using the random variable and its associated probability distribution. In the case of a dice,
$ E(X) = \sum_{x=1}^6 x\cdot\tfrac{1}{6} = 1\cdot\tfrac{1}{6} + 2\cdot\tfrac{1}{6} + 3\cdot\tfrac{1}{6} + 4\cdot\tfrac{1}{6} + 5\cdot\tfrac{1}{6} + 6\cdot\tfrac{1}{6} = 3.5 $
Continuous Case
If $X$ is a continuous random variable with density function $f_X(x),$ then the expectation of $X$ is $\displaystyle E(X)=\int_{-\infty}^\infty x\,f_X(x)\,dx$
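A small numeric sketch (my own illustration, with an assumed uniform density on $[0,1]$) shows the definition in action; the exact value here is 0.5:

```python
# Approximate E(X) = integral of x * f_X(x) dx with a midpoint Riemann sum
# for a uniform density on [0, 1]; the exact expectation is 0.5.
def f_X(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

N = 10_000
dx = 1.0 / N
E = sum(((i + 0.5) * dx) * f_X((i + 0.5) * dx) * dx for i in range(N))
print(round(E, 6))  # -> 0.5
```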
If $Y=g(X)$ is a function of $X,$ then $\displaystyle E(Y)=\int_{-\infty}^\infty g(x)\,f_X(x)\,dx$ (the "law of the unconscious statistician").
Edit:
After reading comments and answers,
Definition:
Expected value of a RV says that, if a random experiment is repeated n number of times, what would be the average value of the outcomes.
Expected value of a Random Variable is a value. It is not a probability.
Example:
In case of a dice, if the dice is rolled $n$ number of times, the average value of the outcomes would be 3.5. |
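The corrected definition is easy to see in a simulation (my own sketch, not part of the original post): the sample average of many rolls approaches 3.5, even though no single roll ever shows 3.5:

```python
import random

# Monte Carlo check: the average of many fair-dice rolls approaches 3.5.
random.seed(0)
n = 100_000
rolls = [random.randint(1, 6) for _ in range(n)]
average = sum(rolls) / n
exact = sum(x * (1 / 6) for x in range(1, 7))  # the exact expectation, 3.5
print(round(average, 3))
```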
IMO transaction data is a better approach, because you have both sides of the trade agreeing that the price is "right." The literature tends to decompose the transaction price $P$ into a true/efficient price $P^e$ plus micro-structure noise, which I think originates from Hasbrouck '93 in the Review of Financial Studies. So you end up with something like $$P^e_t = P^e_{t-1} + \nu$$ and $$P_t = round(P^e_t + c_t Q_t, d)$$ where $\nu \sim N(0, \sigma^2_t)$, $c_t > 0$, $Q_t \in \left\{-1, 1 \right\}$, and $d$ is the tick size. Note that $c_t$ provides the spread and $Q_t$ tells you if the transaction is buyer or seller initiated (typically determined with the "Lee-Ready algorithm"). I found this particular presentation in a 2002 working paper from Engle and Russell (edit: titled Analysis of High Frequency Data); I think this is pretty standard and you can probably find a good deal of research that tries to provide $c_t = f(\cdot)$. It looks like Andersen, Bollerslev, and Diebold have a 2007 NBER working paper (edit: titled Roughing it Up: Including Jump Components in the Measurement, Modeling and Forecasting of Return Volatility) that provides a more thorough treatment of these ideas.
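Here's a tiny simulation sketch of that decomposition (all parameter values are assumed for illustration, with a constant spread; not from the papers cited):

```python
import random

# Simulate P^e_t = P^e_{t-1} + nu and P_t = round(P^e_t + c_t * Q_t, d)
# with an assumed constant spread c, volatility sigma, and tick size d.
random.seed(42)
sigma, c, tick = 0.02, 0.01, 0.01
p_eff = 100.0
prices = []
for _ in range(10):
    p_eff += random.gauss(0.0, sigma)             # efficient-price innovation nu
    q = random.choice([-1, 1])                    # buyer- or seller-initiated
    p_obs = round((p_eff + c * q) / tick) * tick  # round onto the tick grid
    prices.append(round(p_obs, 2))
print(prices)
```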
When you're dealing with (ultra) high-frequency data you also have the problem of time to transaction. Engle has a 2000 Econometrica paper (edit: titled The Econometrics of Ultra-High-Frequency Data) in which he describes how to account for time to transaction, but he's using bid-ask midpoints, not transactions.
I don't have any first-hand experience to know if using the midpoint is a bad assumption in practice, but the 2000 and 2007 papers should be a good start. |
Electronic Journal of Probability, Volume 17 (2012), paper no. 8, 35 pp.
Distributional properties of exponential functionals of Lévy processes
Abstract
We study the distribution of the exponential functional $I(\xi,\eta)=\int_0^{\infty} \exp(\xi_{t-}) d \eta_t$, where $\xi$ and $\eta$ are independent Lévy processes. In the general setting, using the theory of Markov processes and Schwartz distributions, we prove that the law of this exponential functional satisfies an integral equation, which generalizes Proposition 2.1 in \cite{CPY}. In the special case when $\eta$ is a Brownian motion with drift, we show that this integral equation leads to an important functional equation for the Mellin transform of $I(\xi,\eta)$, which proves to be a very useful tool for studying the distributional properties of this random variable. For a general Lévy process $\xi$ ($\eta$ being a Brownian motion with drift) we prove that the exponential functional has a smooth density on $\mathbb{R} \setminus \{0\}$, but surprisingly the second derivative at zero may fail to exist. Under the additional assumption that $\xi$ has some positive exponential moments we establish the asymptotic behaviour of $\mathbb{P}(I(\xi,\eta)>x)$ as $x\to +\infty$, and under similar assumptions on the negative exponential moments of $\xi$ we obtain a precise asymptotic expansion of the density of $I(\xi,\eta)$ as $x\to 0$. Under further assumptions on the Lévy process $\xi$, one is able to prove much stronger results about the density of the exponential functional, and we illustrate some of the ideas and techniques for the case when $\xi$ has hyper-exponential jumps.
Article information
Source: Electron. J. Probab., Volume 17 (2012), paper no. 8, 35 pp.
Dates: Accepted 25 January 2012. First available in Project Euclid: 4 June 2016
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1465062330
Digital Object Identifier: doi:10.1214/EJP.v17-1755
Mathematical Reviews number (MathSciNet): MR2878787
Zentralblatt MATH identifier: 1246.60073
Subjects: Primary: 60G51: Processes with independent increments; Lévy processes
Rights: This work is licensed under a Creative Commons Attribution 3.0 License.
Citation
Kuznetsov, Alexey; Pardo, Juan Carlos; Savov, Mladen. Distributional properties of exponential functionals of Lévy processes. Electron. J. Probab. 17 (2012), paper no. 8, 35 pp. doi:10.1214/EJP.v17-1755. https://projecteuclid.org/euclid.ejp/1465062330 |
One of the forms of algebraic expressions is the power expression. It is of the form $a^n$, where $a$ is called the base and $n$ is called the exponent. The word exponent is so common that power expressions are often described as exponential expressions.
The value of the expression is the product of the base multiplied by itself exponent-many times, that is, the product of $a$ multiplied $n$ times. The exponent can be any real number. We will discuss here the case when the exponent is a fraction.
The basic concept behind fractional exponents is the laws of indices,
$a^m \times a^n = a^{m+n}$ and $a^m \div a^n = a^{m-n}$
If we use fractional exponents, the expressions become
$a^{\frac{m}{n}} \times a^{\frac{m'}{n'}} = a^{\frac{m}{n} + \frac{m'}{n'}} = a^{\frac{mn' + m'n}{nn'}}$
and
$a^{\frac{m}{n}} \div a^{\frac{m'}{n'}} = a^{\frac{m}{n} - \frac{m'}{n'}} = a^{\frac{mn' - m'n}{nn'}}$
Let us take a simple case. We know that $a$ is the same as $a^1$, when expressed in exponential form.
Let the expression $a^n$ represent the square root of $a$ as a fractional exponent. Then by definition,
$a^n \times a^n = a^1$, or $a^{n+n} = a^1$, or $a^{2n} = a^1$
Therefore, $2n = 1$, or $n = \frac{1}{2}$.
Hence the square root of $a = a^{1/2}$.
Generalizing the same concept, it can be established that $\sqrt[n]{a} = a^{1/n}$.
The numerator of the fraction in a fractional exponent need not always be 1, and it could also be an improper fraction.
Let $a^{m/n}$ be a power expression with a fractional exponent. Then, as per the laws of indices,
$a^{m/n} = (a^m)^{1/n} = \sqrt[n]{a^m}$, as per the concept established earlier.
The concept of fractional exponents greatly helps in simplifying radicals.
Example 1: Simplify $32^{1/4}$.
Solution: $32^{1/4} = \sqrt[4]{32} = \sqrt[4]{(2)(2)(2)(2)(2)} = 2^{5/4}$
Example 2: Simplify $4^{3/2}$.
Solution: $4^{3/2} = \sqrt{4^3} = \sqrt{64} = 8$
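The two examples can be checked numerically (a quick sketch, nothing more):

```python
# Numeric check of the two worked examples above.
x1 = 32 ** (1 / 4)   # fourth root of 32
x2 = 2 ** (5 / 4)    # the simplified form 2^(5/4)
assert abs(x1 - x2) < 1e-9

x3 = 4 ** (3 / 2)    # sqrt(4^3) = sqrt(64)
print(x3)  # approximately 8.0
```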
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Advances in Mathematics of Communications
May 2016, Volume 10, Issue 2
ISSN: 1930-5346; eISSN: 1930-5338
Abstract:
For projective varieties defined over a finite field ${\mathbb F}_q$ we show that they contain a unique subvariety that satisfies the Finite Field Nullstellensatz property [1,2], for homogeneous linear polynomials over ${\mathbb F}_q$. Using these subvarieties we construct linear codes and estimate some of their parameters.
Abstract:
Triangular transformation method (TTM) is one of the multivariate public key cryptosystems (MPKC) based on the intractability of the tame decomposition problem. In TTM, a special class of tame automorphisms are used to construct encryption schemes. However, because of the specificity of such tame automorphisms, it is important to evaluate the computational complexity of the tame decomposition problem for secure use of MPKC. In this paper, as the first step for security evaluations, we focus on Matsumoto-Imai cryptosystems. We shall prove that the Matsumoto-Imai central maps in three variables over $\mathbb{F}_{2}$ are tame, and we describe the tame decompositions of the Matsumoto-Imai central maps.
Abstract:
This article aims to explore the bridge between the algebraic structure of a linear code and the complete decoding process. To this end, we associate a specific binomial ideal $I_+(\mathcal C)$ to an arbitrary linear code. The binomials involved in the reduced Gröbner basis of such an ideal relative to a degree-compatible ordering induce a uniquely defined test-set for the code, and this allows the description of a Hamming metric decoding procedure. Moreover, the binomials involved in the Graver basis of $I_+(\mathcal C)$ provide a universal test-set which turns out to be a set containing the set of codewords of minimal support of the code.
Abstract:
In this paper, cyclic codes over the Galois ring ${\rm GR}({p^2},s)$ are studied. The main result is the characterization and enumeration of Hermitian self-dual cyclic codes of length $p^a$ over ${\rm GR}({p^2},s)$. Combining with some known results and the standard Discrete Fourier Transform decomposition, we arrive at the characterization and enumeration of Euclidean self-dual cyclic codes of any length over ${\rm GR}({p^2},s)$.
Abstract:
This paper revisits strongly-MDS convolutional codes with maximum distance profile (MDP). These are (non-binary) convolutional codes that have an optimal sequence of column distances and attain the generalized Singleton bound at the earliest possible time frame. These properties make such convolutional codes applicable over the erasure channel, since they are able to correct a large number of erasures per time interval. The existence of these codes has been shown only in some specific cases. This paper shows by construction the existence of convolutional codes that are both strongly-MDS and MDP for all choices of parameters.
Abstract:
The projection construction has been used to construct semifields of odd characteristic using a field and a twisted semifield [Commutative semifields from projection mappings, Designs, Codes and Cryptography, 61 (2011), 187--196]. We generalize this idea to a projection construction using two twisted semifields to construct semifields of odd characteristic. Planar functions and semifields have a strong connection, so this also constructs new planar functions.
Abstract:
The interpolation-based decoding that was developed for general evaluation AG codes is shown to be equally applicable to general differential AG codes. A performance analysis of the decoding algorithm, which is parallel to that of its companion algorithm, is reported. In particular, the decoding capacities of evaluation AG codes and differential AG codes are seen to be interrelated symmetrically. As an interesting special case, a decoding algorithm for classical Goppa codes is presented.
Abstract:
A sequence is called perfect if its autocorrelation function is a delta function. In this paper, we give a new definition of the autocorrelation function: $\omega$-cyclic-conjugated autocorrelation. As a result, we present several classes of $\omega$-cyclic-conjugated-perfect quaternary Golay sequences, where $\omega=\pm 1$. We also consider this perfect property for $4^q$-QAM Golay sequences, where $q\ge 2$ is an integer.
Abstract:
Communication over channels that may vary in an arbitrary and unknown manner from channel use to channel use is studied. Such channels fall in the framework of
arbitrarily varying channels (AVCs), for which it has been shown that the classical deterministic approaches with pre-specified encoder and decoder fail if the AVC is symmetrizable. However, more sophisticated strategies such as common randomness (CR)-assisted codes or list decoding are capable of resolving the ambiguity induced by symmetrizable AVCs. AVCs further serve as the indispensable basis for modeling adversarial attacks such as jamming in information-theoretic security related communication problems. In this paper, we study the arbitrarily varying multiple access channel (AVMAC) with conferencing encoders, which models the communication scenario with two cooperating transmitters and one receiver. This can be motivated, for example, by cooperating base stations or access points in future systems. The capacity region of the AVMAC with conferencing encoders is established, and it is shown that list decoding allows for reliable communication also for symmetrizable AVMACs. The list capacity region equals the CR-assisted capacity region for large enough list size. Finally, for fixed probability of decoding error the amount of resources, i.e., CR or list size, is quantified and shown to be finite.
Abstract:
We present bounds on the number of points in algebraic curves and algebraic hypersurfaces in $\mathbb{P}^n(\mathbb{F}_q)$ of small degree $d$, depending on the number of linear components contained in such curves and hypersurfaces. The obtained results have applications to the weight distribution of the projective Reed-Muller codes PRM$(q,d,n)$ over the finite field $\mathbb{F}_q$.
Abstract:
The geometric structure of any relative one-weight code is determined, and by using this geometric structure, the support weight distribution of subcodes of any relative one-weight code is presented. An application of relative one-weight codes to the wire-tap channel of type II with multiple users is given, and certain kinds of relative one-weight codes all of whose nonzero codewords are minimal are determined.
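The abstract concerns relative one-weight codes; the basic one-weight property (every nonzero codeword has the same Hamming weight) can be checked by brute force for a small binary code. A Python sketch, using the [7,3] simplex code purely as an illustration (it is not an example from the paper):

```python
from itertools import product

def weight_distribution(G):
    """Tally Hamming weights of all codewords generated by the binary
    generator matrix G (a list of rows over GF(2))."""
    k = len(G)
    dist = {}
    for msg in product([0, 1], repeat=k):
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        w = sum(cw)
        dist[w] = dist.get(w, 0) + 1
    return dist

# Generator matrix of the binary [7,3] simplex code: its columns are all
# nonzero vectors of GF(2)^3, and every nonzero codeword has weight 4.
G = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]
print(weight_distribution(G))  # {0: 1, 4: 7}
```

The distribution {0: 1, 4: 7} confirms that all seven nonzero codewords share the weight 4, i.e., the code is one-weight.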
Abstract:
We study codes over the commutative local Frobenius rings of order 16 with maximal ideals of size 8. We define a weight preserving Gray map and study the images of these codes as binary codes. We study self-dual codes and determine when they exist.
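The paper defines its own weight-preserving Gray map for these rings of order 16. As a smaller, well-known analogue (shown here for illustration only, not the map from the paper), the classical Gray map from $\mathbb{Z}_4$ to $\mathbb{F}_2^2$ sends Lee weight to Hamming weight:

```python
# Classical Gray map Z_4 -> GF(2)^2: 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_image(codeword):
    """Concatenate the Gray images of the symbols of a Z_4 codeword."""
    return tuple(bit for sym in codeword for bit in GRAY[sym])

def lee_weight(codeword):
    """Lee weight over Z_4: wt(0)=0, wt(1)=wt(3)=1, wt(2)=2."""
    return sum(min(sym, 4 - sym) for sym in codeword)
```

For instance, the word (0, 1, 2, 3) has Lee weight 4, and its Gray image (0, 0, 0, 1, 1, 1, 1, 0) has Hamming weight 4 as well.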
Abstract:
Ternary constant weight codes of length $n=2^m$, weight $n-1$, cardinality $2^n$ and distance $5$ are known to exist for every $m$ for which there exists an APN permutation of order $2^m$, that is, at least for all odd $m \geq 3$ and for $m=6$. We show the non-existence of such codes for $m=4$ and prove that any codes with the parameters above are diameter perfect.
Abstract:
A sequence of period $n$ is called a nearly perfect sequence of type $\gamma$ if all out-of-phase autocorrelation coefficients equal a constant $\gamma$. In this paper we study nearly perfect sequences (NPS) via their connection to direct product difference sets (DPDS). We prove the connection between a $p$-ary NPS of period $n$ and type $\gamma$ and a cyclic $(n,p,n,\frac{n-\gamma}{p}+\gamma,0,\frac{n-\gamma}{p})$-DPDS for an arbitrary integer $\gamma$. Next, we present necessary conditions for the existence of a $p$-ary NPS of type $\gamma$. We apply this result to exclude the existence of some $p$-ary NPS of period $n$ and type $\gamma$ for $n \leq 100$ and $\vert \gamma \vert \leq 2$. We also prove similar results for almost $p$-ary NPS of type $\gamma$. Finally, we show the non-existence of some almost $p$-ary perfect sequences by showing the non-existence of the equivalent cyclic relative difference sets, using the notion of multipliers.
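The defining property of an NPS can be checked numerically from the definition. A Python sketch (the example sequence is the classical perfect binary sequence of period 4, not one taken from the paper):

```python
def autocorrelation(seq, t):
    """Periodic autocorrelation of a sequence at shift t."""
    n = len(seq)
    return sum(seq[i] * seq[(i + t) % n].conjugate() for i in range(n))

def nps_type(seq):
    """Return gamma if every out-of-phase autocorrelation equals the same
    constant gamma, and None otherwise."""
    vals = {autocorrelation(seq, t) for t in range(1, len(seq))}
    return vals.pop() if len(vals) == 1 else None

print(nps_type([1, 1, 1, -1]))  # 0: a perfect sequence (gamma = 0)
```

A perfect sequence is then exactly an NPS of type $\gamma = 0$.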
Abstract:
It is well known that the main problem in decoding the extended Reed-Solomon codes is computing the error distance of a word. Using some algebraic constructions, we are able to determine the error distance to the extended Reed-Solomon codes of words whose degrees are $k+1$ and $k+2$. As a corollary, we easily recover the results of Zhang-Fu-Liao on the deep hole problem of Reed-Solomon codes.
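For tiny parameters the error distance can be computed by brute force, which makes the definitions concrete. The sketch below uses a standard (not extended) Reed-Solomon code and hypothetical small parameters chosen only for illustration:

```python
from itertools import product

def rs_codewords(p, evals, k):
    """All codewords of the [n, k] Reed-Solomon code over GF(p): evaluations
    of every polynomial of degree < k at the points in evals."""
    for coeffs in product(range(p), repeat=k):
        yield tuple(sum(c * pow(a, i, p) for i, c in enumerate(coeffs)) % p
                    for a in evals)

def error_distance(word, p, evals, k):
    """Hamming distance from word to the nearest codeword (brute force)."""
    return min(sum(u != v for u, v in zip(word, cw))
               for cw in rs_codewords(p, evals, k))

evals = list(range(5))                      # hypothetical tiny parameters
cw = tuple((2 * a + 1) % 5 for a in evals)  # codeword of f(x) = 2x + 1
print(error_distance(cw, 5, evals, 2))      # 0: it is a codeword
```

Changing a single coordinate of `cw` yields a word at error distance 1, since the minimum distance of this [5,2] code is 4.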
Abstract:
Compressed sensing is a technique used to reconstruct a sparse signal given few measurements of the signal. One of the main problems in compressed sensing is the deterministic construction of the sensing matrix. Li et al. introduced a new deterministic construction via algebraic-geometric codes (AG codes) and gave an upper bound for the coherence of the sensing matrices coming from AG codes. In this paper, we give the exact value of the coherence of the sensing matrices coming from AG codes in terms of the minimum distance of AG codes, and deduce the upper bound given by Li et al. We also give formulas for the coherence of the sensing matrices coming from Hermitian two-point codes.
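The coherence in question is the largest normalized inner product between distinct columns of the sensing matrix. A minimal Python sketch of the generic definition (not the AG-code construction itself):

```python
import math

def coherence(columns):
    """Coherence of a matrix given by its columns: the largest absolute
    normalized inner product between two distinct columns."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    mu = 0.0
    for i in range(len(columns)):
        for j in range(i + 1, len(columns)):
            num = abs(dot(columns[i], columns[j]))
            den = math.sqrt(dot(columns[i], columns[i])
                            * dot(columns[j], columns[j]))
            mu = max(mu, num / den)
    return mu

print(coherence([(1, 0), (1, 1), (0, 1)]))  # 0.7071... = 1/sqrt(2)
```

Low coherence is what makes a deterministic sensing matrix useful: it controls how well sparse recovery guarantees apply.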
Abstract:
Let $\mathbb{F}_{p^m}$ be a finite field with $p^m$ elements, where $p$ is an odd prime, and $m$ is a positive integer. Let $h_1(x)$ and $h_2(x)$ be minimal polynomials of $-\pi^{-1}$ and $\pi^{-\frac{p^k+1}{2}}$ over $\mathbb{F}_p$, respectively, where $\pi $ is a primitive element of $\mathbb{F}_{p^m}$, and $k$ is a positive integer such that $\frac{m}{\gcd(m,k)}\geq 3$. In [23], Zhou et al. obtained the weight distribution of a class of cyclic codes over $\mathbb{F}_p$ with parity-check polynomial $h_1(x)h_2(x)$ in the following two cases:
• $k$ is even and $\gcd(m,k)$ is odd;
• $\frac{m}{\gcd(m,k)}$ and $\frac{k}{\gcd(m,k)}$ are both odd.
In this paper, we further investigate this class of cyclic codes over $\mathbb{F}_p$ in the other cases and determine its weight distribution.
Abstract:
In this paper we study the family of cyclic codes whose minimum distance reaches the maximum of its BCH bounds. We also show a way to construct cyclic codes with that property by computing certain divisors of a polynomial of the form $x^n-1$. We apply our results to the study of those BCH codes $C$, with designed distance $\delta$, that have minimum distance $d(C)=\delta$. Finally, we present some examples of new binary BCH codes satisfying that condition. To do this, we make use of two related tools: the discrete Fourier transform and the notion of apparent distance of a code, originally defined for multivariate abelian codes.
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have a problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^n\,dx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equals $1$ as $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx: what's y? "dy/dx is by definition not continuous": it's not a function, so how can you ask whether or not it's continuous? ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with $\int_0^{2 \pi} \frac{d}{dn} e^{inx}\, dx$ when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates it's not 0. Does anyone know?
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet and neither a reason why the expression cannot be prime for odd n, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That's what I'm thinking about, I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good.
It is in fact difficult, I did not understand all the details either. But the ECM-method is analogue to the p-1-method which works well, then there is a factor p such that p-1 is smooth (has only small prime factors)
Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $$n! + 1 = m^2,$$ where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. == Brown numbers == Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits under the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
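The two quantitative ingredients of the proof above, Legendre's count of trailing zeros of $n!$ and the digit count of $n!$, are easy to sanity-check numerically. A Python sketch (the exact factorial digit count is used here instead of the Stirling lower bound, which is fine for moderate $n$):

```python
import math

def trailing_zeros(n):
    """k = number of trailing zeros of n!, by Legendre's formula (powers of 5)."""
    k, q = 0, 5
    while q <= n:
        k += n // q
        q *= 5
    return k

def digits_of_factorial(n):
    """Exact decimal digit count of n! (fine for moderate n)."""
    return len(str(math.factorial(n)))

# The proof uses k < n/4 and that n! has more than 2k digits for n > 10.
for n in range(11, 200):
    k = trailing_zeros(n)
    assert k < n / 4
    assert digits_of_factorial(n) > 2 * k
```

For example, 100! ends in 24 zeros (100//5 + 100//25 = 24) and has 158 digits, comfortably more than 2k = 48.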
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
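The congruence step above (combining $x \equiv 4 \pmod{10}$ with $4 \mid x$ to get $x \equiv 4 \pmod{20}$) can be checked by brute force in Python:

```python
# If x = 4 (mod 10) and x = 0 (mod 4), then x = 4 (mod 20):
# the moduli have lcm(10, 4) = 20, so the two congruences fix x mod 20.
residues = {x % 20 for x in range(1000) if x % 10 == 4 and x % 4 == 0}
print(residues)  # {4}
```

The surviving values 4, 24, 44, 64, ... all reduce to 4 modulo 20, as claimed in the chat.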
Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that: For distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$ It is anticipated that there will be far fewer solutions for incr...
Let \( A_{a, b} = \{ (x, y) \in \mathbb{Z}^2 : 1 \leq x \leq a, 1 \leq y \leq b \} \). Consider the following property, which we call Property R:
“If each of the points in \(A\) is colored red, blue, or yellow, then there is a rectangle whose sides are parallel to the axes and whose vertices all have the same color.”
Find the maximum of \(|A_{a, b}|\) such that \( A_{a, b} \) has Property R but \( A_{a-1, b} \) and \( A_{a, b-1} \) do not.
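Checking whether a given 3-coloring of the grid contains a monochromatic axis-parallel rectangle is straightforward. A Python sketch (the helper name and the row-string encoding of a coloring are illustrative choices, not part of the problem):

```python
from itertools import combinations

def monochromatic_rectangle(grid):
    """Given a coloring of the grid as a list of equal-length row strings,
    return (row1, row2, col1, col2) of an axis-parallel rectangle whose four
    vertices share one color, or None if there is none."""
    cols = len(grid[0])
    for y1, y2 in combinations(range(cols), 2):
        seen = {}  # color -> first row where columns y1 and y2 both carry it
        for x, row in enumerate(grid):
            if row[y1] == row[y2]:
                color = row[y1]
                if color in seen:
                    return (seen[color], x, y1, y2)
                seen[color] = x
    return None

print(monochromatic_rectangle(["rr", "rr"]))           # (0, 1, 0, 1)
print(monochromatic_rectangle(["rby", "byr", "yrb"]))  # None
```

Property R asks that *every* coloring admits such a rectangle, so a full verification would run this check over all $3^{ab}$ colorings, which is only feasible for small grids.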
1. There will be no POW this week due to 추석 (thanksgiving) break. POW will resume next week.
2. The submission due for POW2019-12 is extended to Sep. 18 (Wed.).
Let \(I, J\) be connected open intervals such that \(I \cap J\) is a nonempty proper sub-interval of both \(I\) and \(J\). For instance, \(I = (0, 2)\) and \(J = (1, 3)\) form an example.
Let \(f\) (\(g\), resp.) be an orientation-preserving homeomorphism of the real line \(\mathbb{R}\) such that the set of points of \(\mathbb{R}\) which are not fixed by \(f\) (\(g\), resp.) is precisely \(I\) (\(J\), resp.).
Show that for large enough integer \(n\), the group generated by \(f^n, g^n\) is isomorphic to the group with the following presentation
\[ \langle a, b \mid [ab^{-1}, a^{-1}ba] = [ab^{-1}, a^{-2}ba^2] = \mathrm{id} \rangle. \]
Find the smallest prime number \( p \geq 5 \) such that there exist no integer coefficient polynomials \( f \) and \( g \) satisfying
\[
p | ( 2^{f(n)} + 3^{g(n)})
\]
for all positive integers \( n \).
The best solution was submitted by 김태균 (수리과학과 2016학번). Congratulations!
Here is his solution of problem 2019-11.
Other solutions were submitted by 고성훈 (2018학번, +3), 조재형 (수리과학과 2016학번, +3), 채지석 (수리과학과 2016학번, +3), 최백규 (생명과학과 2016학번, +3).
Let \(G\) be a group. A topology on \(G\) is said to be a group topology if the map \(\mu: G \times G \to G\) defined by \(\mu(g, h) = g^{-1}h\) is continuous with respect to this topology, where \(G \times G\) is equipped with the product topology. A group equipped with a group topology is called a topological group. When we have two topologies \(T_1, T_2\) on a set \(S\), we write \(T_1 \leq T_2\) if \(T_2\) is finer than \(T_1\); this gives a partial order on the set of topologies on a given set. Prove or disprove the following statement: for a given group \(G\), there exists a unique minimal group topology on \(G\) (minimal with respect to the partial order described above) such that \(G\) is a Hausdorff space.
The best solution was submitted by 이정환 (수리과학과 2015학번). Congratulations!
Here is his solution of problem 2019-10.
An incomplete solution was submitted by 채지석 (수리과학과 2016학번, +2).
Find the smallest prime number \( p \geq 5 \) such that there exist no integer coefficient polynomials \( f \) and \( g \) satisfying
\[ p | ( 2^{f(n)} + 3^{g(n)}) \] for all positive integers \( n \).
For the 10th problem for POW this year, I added a condition that we only consider the group topologies which make the given group a Hausdorff space. Since the problem has been modified, I decided to extend the deadline for this problem. Please hand in your solution by 12pm on Friday (May 31st).
Let \(G\) be a group. A topology on \(G\) is said to be a group topology if the map \(\mu: G \times G \to G\) defined by \(\mu(g, h) = g^{-1}h\) is continuous with respect to this topology, where \(G \times G\) is equipped with the product topology. A group equipped with a group topology is called a topological group. When we have two topologies \(T_1, T_2\) on a set \(S\), we write \(T_1 \leq T_2\) if \(T_2\) is finer than \(T_1\); this gives a partial order on the set of topologies on a given set. Prove or disprove the following statement: for a given group \(G\), there exists a unique minimal group topology on \(G\) (minimal with respect to the partial order described above) such that \(G\) is a Hausdorff space.
Suppose that \( X \) is a discrete random variable on the set \( \{ a_1, a_2, \dots \} \) with \( P(X=a_i) = p_i \). Define the discrete entropy
\[
H(X) = -\sum_{i=1}^{\infty} p_i \log p_i.
\]
Find constants \( C_1, C_2 \geq 0 \) such that
\[
e^{2H(X)} \leq C_1 Var(X) + C_2
\]
holds for any \( X \).
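To get a feel for the bound, one can compute \(H(X)\) and \(\mathrm{Var}(X)\) for concrete distributions. For the uniform distribution on \(m\) points \(0, \dots, m-1\) we have \(e^{2H} = m^2\) and \(\mathrm{Var} = (m^2-1)/12\), so \(e^{2H} = 12\,\mathrm{Var} + 1\) with equality; this shows no constants with \(C_1 < 12\) and \(C_2 \leq 1\) can work, while whether \((C_1, C_2) = (12, 1)\) suffices for every \(X\) is exactly the problem. A Python sketch:

```python
import math

def entropy(p):
    """Discrete entropy H(X) = -sum p_i log p_i (natural logarithm)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def variance(values, p):
    """Variance of a discrete random variable taking values with probs p."""
    mean = sum(v * pi for v, pi in zip(values, p))
    return sum(pi * (v - mean) ** 2 for v, pi in zip(values, p))

# Uniform distribution on m points: e^{2H} = m^2 and 12*Var + 1 = m^2.
for m in (2, 5, 10):
    vals, p = range(m), [1 / m] * m
    assert abs(math.exp(2 * entropy(p)) - (12 * variance(vals, p) + 1)) < 1e-9
```

The values \(a_i\) are arbitrary reals, so non-uniform and unbounded distributions are the interesting test cases for the general bound.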
The best solution was submitted by 길현준 (2018학번). Congratulations!
Here is his solution of problem 2019-09.
Alternative solutions were submitted by 최백규 (생명과학과 2016학번, +3). Incomplete solutions were submitted by 이정환 (수리과학과 2015학번, +2) and 채지석 (수리과학과 2016학번, +2).
Suppose that \( X \) is a discrete random variable on the set \( \{ a_1, a_2, \dots \} \) with \( P(X=a_i) = p_i \). Define the discrete entropy
\[ H(X) = -\sum_{i=1}^{\infty} p_i \log p_i. \] Find constants \( C_1, C_2 \geq 0 \) such that \[ e^{2H(X)} \leq C_1 Var(X) + C_2 \] holds for any \( X \).
ISSN:
1930-8337
eISSN:
1930-8345
Inverse Problems & Imaging
February 2011 , Volume 5 , Issue 1
Abstract:
We investigate iterated Tikhonov methods coupled with a Kaczmarz strategy for obtaining stable solutions of nonlinear systems of ill-posed operator equations. We show that the proposed method is a convergent regularization method. In the case of noisy data we propose a modification, the so-called loping iterated Tikhonov-Kaczmarz method, where a sequence of relaxation parameters is introduced and a different stopping rule is used. A convergence analysis for this method is also provided.
Abstract:
Detecting and identifying targets or objects that are present in hyperspectral ground images are of great interest. Applications include land and environmental monitoring, mining, military, civil search-and-rescue operations, and so on. We propose and analyze an extremely simple and efficient idea for template matching based on $l_1$ minimization. The designed algorithm can be applied in hyperspectral classification and target detection. Synthetic image data and real hyperspectral image (HSI) data are used to assess the performance, with comparisons to other approaches, e.g. spectral angle map (SAM), adaptive coherence estimator (ACE), generalized-likelihood ratio test (GLRT) and matched filter. We demonstrate that this algorithm achieves excellent results with both high speed and accuracy by using Bregman iteration.
Abstract:
We present an optimal strategy for the relative weighting of multiple data modalities in inverse problems, and derive the maximum compatibility estimate (MCE) that corresponds to the maximum likelihood or maximum a posteriori estimates in the case of a single data mode. MCE is not explicitly dependent on the noise levels, scale factors or numbers of data points of the complementary data modes, and can be determined without the mode weight parameters. We also discuss discontinuities in the solution estimates in multimodal inverse problems, and derive a corresponding self-consistency criterion. As a case study, we consider the problem of reconstructing the shape and the spin state of a body in $\mathbb{R}^3$ from the boundary curves (profiles) and volumes (brightness values) of its generalized projections in $\mathbb{R}^2$. We also show that the generalized profiles uniquely determine a large class of shapes. We present a solution method well suited for adaptive optics images in particular, and discuss various choices of regularization functions.
Abstract:
In this paper, we prove the global uniqueness of determining both the magnetic field and the electrical potential by boundary measurements in two-dimensional case. In other words, we prove the uniqueness of this inverse problem without any smallness assumption.
Abstract:
It is common for example in Cryo-electron microscopy of viruses, that the orientations at which the projections are acquired, are totally unknown. We introduce here a moment based algorithm for recovering them in the three-dimensional parallel beam tomography. In this context, there is likely to be also unknown shifts in the projections. They will be estimated simultaneously. Also stability properties of the algorithm are examined. Our considerations rely on recent results that guarantee a solution to be almost always unique. A similar analysis can also be done in the two-dimensional problem.
Abstract:
In this paper, we present an efficient algorithm for computing the Euclidean skeleton of an object directly from a point cloud representation on an underlying grid. The key point of this algorithm is to identify those grid points that are (approximately) on the skeleton using the closest point information of a grid point and its neighbors. The three main ingredients of the algorithm are: (1) computing closest point information efficiently on a grid, (2) identifying possible skeletal points based on the number of closest points of a grid point and its neighbors with smaller distances, (3) applying a distance ordered homotopic thinning process to remove the non-skeletal points while preserving the end points or the edge points of the skeleton. Computational examples in 2D and 3D are presented.
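Ingredient (1), the closest point information on a grid, can be illustrated with a multi-source BFS. This is only a discrete stand-in for the paper's method: the actual algorithm propagates Euclidean closest-point data, whereas BFS uses 4-neighbor grid distance and breaks ties arbitrarily:

```python
from collections import deque

def closest_point_info(shape, sources):
    """For every cell of a rows x cols grid, record which source cell is
    nearest in 4-neighbor grid distance (multi-source BFS, arbitrary ties)."""
    rows, cols = shape
    closest = {s: s for s in sources}
    queue = deque(sources)
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < rows and 0 <= ny < cols and (nx, ny) not in closest:
                closest[(nx, ny)] = closest[(x, y)]
                queue.append((nx, ny))
    return closest

info = closest_point_info((3, 3), [(0, 0), (2, 2)])
print(info[(0, 1)], info[(2, 1)])  # (0, 0) (2, 2)
```

Skeletal candidates in the paper's step (2) are then roughly the grid points where the closest points of neighboring cells disagree strongly.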
Abstract:
This note is devoted to a mathematical exploration of whether Lowe's Scale-Invariant Feature Transform (SIFT) [21], a very successful image matching method, is similarity invariant as claimed. It is proved that the method is scale invariant only if the initial image blurs are exactly guessed. Yet, even a large error on the initial blur is quickly attenuated by this multiscale method as the scale of analysis increases. In consequence, its scale invariance is almost perfect. The mathematical arguments are given under the assumption that the Gaussian smoothing performed by SIFT gives an aliasing-free sampling of the image evolution. The validity of this main assumption is confirmed by a rigorous experimental procedure, and by a mathematical proof. These results explain why SIFT outperforms all other image feature extraction methods when it comes to scale invariance.
Abstract:
This paper presents a level-set based approach for the simultaneous reconstruction and segmentation of the activity as well as the density distribution from tomography data gathered by an integrated SPECT/CT scanner.
Activity and density distributions are modeled as piecewise constant functions. The segmenting contours and the corresponding function values of both the activity and the density distribution are found as minimizers of a Mumford-Shah like functional over the set of admissible contours and -- for fixed contours -- over the spaces of piecewise constant density and activity distributions which may be discontinuous across their corresponding contours. For the latter step a Newton method is used to solve the nonlinear optimality system. Shape sensitivity calculus is used to find a descent direction for the cost functional with respect to the geometrical variables which leads to an update formula for the contours in the level-set framework. A heuristic approach for the insertion of new components for the activity as well as the density function is used. The method is tested for synthetic data with different noise levels.
Abstract:
We propose a new class of Gaussian priors, correlation priors. In contrast to some well-known smoothness priors, they have stationary covariances. The correlation priors are given in a parametric form with two parameters: correlation power and correlation length. The first parameter is connected with our prior information on the variance of the unknown. The second parameter is our prior belief on how fast the correlation of the unknown approaches zero. Roughly speaking, the correlation length is the distance beyond which two points of the unknown may be considered independent.
The prior distribution is constructed to be essentially independent of the discretization so that the a posteriori distribution will be essentially independent of the discretization grid. The covariance of a discrete correlation prior may be formed by combining the Fisher information of a discrete white noise and different-order difference priors. This is interpreted as a combination of virtual measurements of the unknown. Closed-form expressions for the continuous limits are calculated. Also, boundary correction terms for correlation priors on finite intervals are given.
A numerical example, deconvolution with a Gaussian kernel and a correlation prior, is computed.
Abstract:
The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems, the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation.
Abstract:
In recent years considerable interest has been directed at devising obstacles which appear "invisible" to various types of wave propagation. That is: suppose, for example, we can construct an obstacle coated with an appropriate material such that when illuminated by an incident field (e.g. a plane wave) the wave scattered by the obstacle has zero cross section (equivalently, radiation pattern) for all incident directions and frequencies; then the obstacle appears to have no effect on the illuminating wave and can be considered invisible. Such an electromagnetic cloaking device has been constructed by Schurig et al. [18]. Motivated by recent work [1] concerning the problem of a parallel flow of particles falling on a body with piecewise smooth boundary and leaving no trace, the analogous problem of acoustic scattering of a plane wave illuminating the same body, a gateway, is considered. It is shown that at high frequencies, with use of the Kirchhoff approximation and the geometrical theory of diffraction, the scattered far field in a range of observed directions, e.g. the backscattered direction, is zero for a discrete set of wave numbers. That is, the gateway is acoustically "invisible" in that direction.
Abstract:
We consider the problem of minimizing the functional $\int_\Omega a|\nabla u|dx$, with $u$ in some appropriate Banach space and prescribed trace $f$ on the boundary. For $a\in L^2(\Omega)$ and $u$ in the sample space $H^1(\Omega)$, this problem appeared recently in imaging the electrical conductivity of a body when some interior data are available. When $a\in C(\Omega)\cap L^\infty(\Omega)$, the functional has a natural interpretation, which suggests that one should consider the minimization problem in the sample space $BV(\Omega)$. We show the stability of the minimum value with respect to $a$, in a neighborhood of a particular coefficient. In both cases the method of proof provides some convergent minimizing procedures. We also consider the minimization problem for the non-degenerate functional $\int_\Omega a\max\{|\nabla u|,\delta\}dx$, for some $\delta>0$, and prove a stability result. Again, the method of proof constructs a minimizing sequence and we identify sufficient conditions for convergence. We apply the last result to the conductivity problem and show that, under an a posteriori smoothness condition, the method recovers the unknown conductivity.
Abstract:
Recently the augmented Lagrangian method has been successfully applied to image restoration. We extend the method to total variation (TV) restoration models with non-quadratic fidelities. We first introduce the method and present an iterative algorithm for TV restoration with a quite general fidelity. In each iteration, three sub-problems need to be solved, two of which can be solved very efficiently via a Fast Fourier Transform (FFT) implementation or in closed form. In general the third sub-problem needs iterative solvers. We then apply our method to TV restoration with $L^1$ and Kullback-Leibler (KL) fidelities, two common and important data terms for deblurring images corrupted by impulsive noise and Poisson noise, respectively. For these typical fidelities, we show that the third sub-problem also has a closed form solution and thus can be solved efficiently. In addition, convergence analysis of these algorithms is given. Numerical experiments demonstrate the efficiency of our method.
Abstract:
We propose an inviscid model for nonrigid image registration in a particle framework, and derive the corresponding nonlinear partial differential equations for computing the spatial transformation. Our idea is to simulate the template image as a set of free particles moving toward the target positions under applied forces. Our model can accommodate both small and large deformations, with sharper edges and clear texture achieved at less computational cost. We demonstrate the performance of our model on a variety of images including 2D and 3D, mono-modal and multi-modal images.
Definition:Norm/Vector Space

Definition

$\norm{\,\cdot\,}: V \to \R_{\ge 0}$
satisfying the (vector space) norm axioms:
\((N1)\) $:$ Positive definiteness: $\forall x \in V: \norm x = 0 \iff x = \mathbf 0_V$
\((N2)\) $:$ Positive homogeneity: $\forall x \in V, \lambda \in R: \norm {\lambda x} = \norm {\lambda}_R \times \norm x$
\((N3)\) $:$ Triangle inequality: $\forall x, y \in V: \norm {x + y} \le \norm x + \norm y$
Let $\norm {\,\cdot\,}$ be a norm on $V$.
Then $\struct {V, \norm {\,\cdot\,} }$ is a normed vector space.

$\norm {\,\cdot\,}: R \to \R_{\ge 0}$
satisfying the
(ring) multiplicative norm axioms:
\((N1)\) $:$ Positive definiteness: $\forall x \in R: \norm x = 0 \iff x = 0_R$
\((N2)\) $:$ Multiplicativity: $\forall x, y \in R: \norm {x \circ y} = \norm x \times \norm y$
\((N3)\) $:$ Triangle inequality: $\forall x, y \in R: \norm {x + y} \le \norm x + \norm y$

Notes
However, the definition given here incorporates this approach.
Also known as
The term length is occasionally seen as an alternative for norm.
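As a quick sanity check of the axioms above (an illustrative example, not part of the definition page), the Euclidean norm on $\R^2$ satisfies (N1)-(N3); the snippet below verifies them numerically for sample vectors:

```python
import math

# Euclidean norm on R^2 -- one concrete instance of the norm axioms.
def norm(x):
    return math.sqrt(sum(c * c for c in x))

x, y, lam = (3.0, 4.0), (-1.0, 2.0), -2.0

# (N1) positive definiteness: norm is zero exactly at the zero vector
assert norm((0.0, 0.0)) == 0.0 and norm(x) > 0
# (N2) positive homogeneity: ||lam * x|| = |lam| * ||x||
assert math.isclose(norm((lam * x[0], lam * x[1])), abs(lam) * norm(x))
# (N3) triangle inequality: ||x + y|| <= ||x|| + ||y||
assert norm((x[0] + y[0], x[1] + y[1])) <= norm(x) + norm(y)
```

Of course a check at sample points is no proof; it merely illustrates what each axiom asserts.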
OpenCV 4.0.1
Open Source Computer Vision
GMat cv::gapi::concatHor (const GMat &src1, const GMat &src2) - Applies horizontal concatenation to given matrices.
GMat cv::gapi::concatHor (const std::vector< GMat > &v) - Overload taking a vector of matrices.
GMat cv::gapi::concatVert (const GMat &src1, const GMat &src2) - Applies vertical concatenation to given matrices.
GMat cv::gapi::concatVert (const std::vector< GMat > &v) - Overload taking a vector of matrices.
GMat cv::gapi::convertTo (const GMat &src, int rdepth, double alpha=1, double beta=0) - Converts a matrix to another data depth with optional scaling.
GMat cv::gapi::crop (const GMat &src, const Rect &rect) - Crops a 2D matrix.
GMat cv::gapi::flip (const GMat &src, int flipCode) - Flips a 2D matrix around vertical, horizontal, or both axes.
GMat cv::gapi::LUT (const GMat &src, const Mat &lut) - Performs a look-up table transform of a matrix.
GMat cv::gapi::LUT3D (const GMat &src, const GMat &lut3D, int interpolation=INTER_NEAREST) - Performs a 3D look-up table transform of a multi-channel matrix.
GMat cv::gapi::merge3 (const GMat &src1, const GMat &src2, const GMat &src3) - Creates one 3-channel matrix out of 3 single-channel ones.
GMat cv::gapi::merge4 (const GMat &src1, const GMat &src2, const GMat &src3, const GMat &src4) - Creates one 4-channel matrix out of 4 single-channel ones.
GMat cv::gapi::remap (const GMat &src, const Mat &map1, const Mat &map2, int interpolation, int borderMode=BORDER_CONSTANT, const Scalar &borderValue=Scalar()) - Applies a generic geometrical transformation to an image.
GMat cv::gapi::resize (const GMat &src, const Size &dsize, double fx=0, double fy=0, int interpolation=INTER_LINEAR) - Resizes an image.
std::tuple< GMat, GMat, GMat > cv::gapi::split3 (const GMat &src) - Divides a 3-channel matrix into 3 single-channel matrices.
std::tuple< GMat, GMat, GMat, GMat > cv::gapi::split4 (const GMat &src) - Divides a 4-channel matrix into 4 single-channel matrices.
Applies horizontal concatenation to given matrices.
The function horizontally concatenates two GMat matrices (with the same number of rows).
src1 first input matrix to be considered for horizontal concatenation. src2 second input matrix to be considered for horizontal concatenation.
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. The function horizontally concatenates a given number of GMat matrices (with the same number of rows). The output matrix has the same number of rows and depth as the input matrices, and its number of columns is the sum of the columns of the input matrices.
v vector of input matrices to be concatenated horizontally.
Applies vertical concatenation to given matrices.
The function vertically concatenates two GMat matrices (with the same number of columns).
src1 first input matrix to be considered for vertical concatenation. src2 second input matrix to be considered for vertical concatenation.
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. The function vertically concatenates a given number of GMat matrices (with the same number of columns). The output matrix has the same number of columns and depth as the input matrices, and its number of rows is the sum of the rows of the input matrices.
v vector of input matrices to be concatenated vertically.
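Since cv::gapi builds a lazy computation graph, the shape rules for concatHor/concatVert are easiest to illustrate with an eager NumPy stand-in (np.hstack/np.vstack); this sketch is illustrative only and is not G-API code:

```python
import numpy as np

# Horizontal concatenation: inputs share the row count, columns add up.
a = np.zeros((2, 3), dtype=np.uint8)
b = np.ones((2, 4), dtype=np.uint8)
h = np.hstack([a, b])
assert h.shape == (2, 7)

# Vertical concatenation: inputs share the column count, rows add up.
c = np.ones((5, 3), dtype=np.uint8)
v = np.vstack([a, c])
assert v.shape == (7, 3)
```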
Converts a matrix to another data depth with optional scaling.
The method converts source pixel values to the target data depth. saturate_cast<> is applied at the end to avoid possible overflows:
\[m(x,y) = saturate \_ cast<rType>( \alpha (*this)(x,y) + \beta )\]
Output matrix must be of the same size as input one.
src input matrix to be converted from. rdepth desired output matrix depth (or, rather, the depth, since the number of channels is the same as in the input); if rdepth is negative, the output matrix will have the same depth as the input. alpha optional scale factor. beta optional delta added to the scaled values.
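The saturate_cast formula above amounts to "scale, shift, then clip to the target depth's range". A NumPy sketch of that semantics (illustrative stand-in, not the G-API call itself), using 8-bit output so the saturation bound is 255:

```python
import numpy as np

# m(x,y) = saturate_cast<rType>(alpha * src(x,y) + beta), for 8-bit output.
src = np.array([[0, 100, 250]], dtype=np.uint8)
alpha, beta = 2.0, 10.0

# Compute in float, then clip into [0, 255] before casting back to uint8.
dst = np.clip(alpha * src.astype(np.float64) + beta, 0, 255).astype(np.uint8)

# 250*2 + 10 = 510 would overflow, so it saturates to 255.
assert dst.tolist() == [[10, 210, 255]]
```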
Flips a 2D matrix around vertical, horizontal, or both axes.
The function flips the matrix in one of three different ways (row and column indices are 0-based):
\[\texttt{dst} _{ij} = \left\{ \begin{array}{l l} \texttt{src} _{\texttt{src.rows}-i-1,j} & if\; \texttt{flipCode} = 0 \\ \texttt{src} _{i, \texttt{src.cols} -j-1} & if\; \texttt{flipCode} > 0 \\ \texttt{src} _{ \texttt{src.rows} -i-1, \texttt{src.cols} -j-1} & if\; \texttt{flipCode} < 0 \\ \end{array} \right.\]
The example scenarios of using the function are the following:
* Vertical flipping of the image (flipCode == 0) to switch between top-left and bottom-left image origin. This is a typical operation in video processing on Microsoft Windows* OS.
* Horizontal flipping of the image with the subsequent horizontal shift and absolute difference calculation to check for a vertical-axis symmetry (flipCode > 0).
* Simultaneous horizontal and vertical flipping of the image with the subsequent shift and absolute difference calculation to check for a central symmetry (flipCode < 0).
* Reversing the order of point arrays (flipCode > 0 or flipCode == 0).
Output image must be of the same depth as input one, and its size should be correct for the given flipCode.
src input matrix. flipCode a flag to specify how to flip the array; 0 means flipping around the x-axis, a positive value (for example, 1) means flipping around the y-axis, and a negative value (for example, -1) means flipping around both axes.
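The three cases of the flip formula map directly onto NumPy slice reversal; the sketch below (an eager stand-in for the G-API operation) checks one corner element against each case:

```python
import numpy as np

src = np.arange(6).reshape(2, 3)   # [[0,1,2],[3,4,5]]

flip0 = src[::-1, :]        # flipCode == 0: around the x-axis (reverse rows)
flip_pos = src[:, ::-1]     # flipCode  > 0: around the y-axis (reverse columns)
flip_neg = src[::-1, ::-1]  # flipCode  < 0: around both axes

# dst[i,j] = src[rows-i-1, j], src[i, cols-j-1], src[rows-i-1, cols-j-1]
assert flip0[0, 0] == src[src.shape[0] - 1, 0]
assert flip_pos[0, 0] == src[0, src.shape[1] - 1]
assert flip_neg[0, 0] == src[src.shape[0] - 1, src.shape[1] - 1]
```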
Performs a look-up table transform of a matrix.
The function LUT fills the output matrix with values from the look-up table. Indices of the entries are taken from the input matrix. That is, the function processes each element of src as follows:
\[\texttt{dst} (I) \leftarrow \texttt{lut(src(I))}\]
Supported matrix data types are CV_8UC1. Output is a matrix of the same size and number of channels as src, and the same depth as lut.
src input matrix of 8-bit elements. lut look-up table of 256 elements; in case of multi-channel input array, the table should either have a single channel (in this case the same table is used for all channels) or the same number of channels as in the input matrix.
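For CV_8UC1 input, dst(I) = lut(src(I)) is just indexing into a 256-entry table; NumPy fancy indexing reproduces the look-up (illustrative stand-in, not G-API code), here with an inverting table:

```python
import numpy as np

# 256-entry look-up table that inverts 8-bit values: lut[i] = 255 - i.
lut = np.array([255 - i for i in range(256)], dtype=np.uint8)
src = np.array([[0, 1, 254, 255]], dtype=np.uint8)

# dst(I) = lut(src(I)): each pixel value is used as an index into lut.
dst = lut[src]
assert dst.tolist() == [[255, 254, 1, 0]]
```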
Performs a 3D look-up table transform of a multi-channel matrix.
The function LUT3D fills the output matrix with values from the look-up table. Indices of the entries are taken from the input matrix. Interpolation is applied for mapping 0-255 range values to 0-16 range of 3DLUT table. The function processes each element of src as follows:
where ~ means approximation. Output is a matrix of CV_8UC3.
src input matrix of CV_8UC3. lut3D look-up table of 17x17x17 3-channel elements. interpolation The depth of interpolation to be used.
GMat cv::gapi::merge4 ( const GMat & src1, const GMat & src2, const GMat & src3, const GMat & src4 )
Creates one 3-channel (4-channel) matrix out of 3(4) single-channel ones.
The function merges several matrices to make a single multi-channel matrix. That is, each element of the output matrix will be a concatenation of the elements of the input matrices, where elements of i-th input matrix are treated as mv[i].channels()-element vectors. Input matrix must be of CV_8UC3 (CV_8UC4) type.
The function split3/split4 does the reverse operation.
src1 first input matrix to be merged src2 second input matrix to be merged src3 third input matrix to be merged src4 fourth input matrix to be merged
GMat cv::gapi::remap (const GMat &src, const Mat &map1, const Mat &map2, int interpolation, int borderMode = BORDER_CONSTANT, const Scalar &borderValue = Scalar())
Applies a generic geometrical transformation to an image.
The function remap transforms the source image using the specified map:
\[\texttt{dst} (x,y) = \texttt{src} (map_x(x,y),map_y(x,y))\]
where values of pixels with non-integer coordinates are computed using one of available interpolation methods. \(map_x\) and \(map_y\) can be encoded as separate floating-point maps in \(map_1\) and \(map_2\) respectively, or interleaved floating-point maps of \((x,y)\) in \(map_1\), or fixed-point maps created by using convertMaps. The reason you might want to convert from floating to fixed-point representations of a map is that they can yield much faster (2x) remapping operations. In the converted case, \(map_1\) contains pairs (cvFloor(x), cvFloor(y)) and \(map_2\) contains indices in a table of interpolation coefficients. Output image must be of the same size and depth as input one.
src Source image. map1 The first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2. map2 The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively. interpolation Interpolation method (see cv::InterpolationFlags). The method INTER_AREA is not supported by this function. borderMode Pixel extrapolation method (see cv::BorderTypes). When borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image that corresponds to the "outliers" in the source image are not modified by the function. borderValue Value used in case of a constant border. By default, it is 0.
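The remap formula dst(x,y) = src(map_x(x,y), map_y(x,y)) can be sketched in NumPy with nearest-neighbour rounding; this is a minimal stand-in for the formula only, not the full interpolation and border-handling machinery:

```python
import numpy as np

src = np.arange(16, dtype=np.float32).reshape(4, 4)

# Per-pixel sampling coordinates for a 2x2 output.
map_x = np.array([[0.0, 3.0], [1.0, 2.0]], dtype=np.float32)
map_y = np.array([[0.0, 0.0], [3.0, 3.0]], dtype=np.float32)

# Nearest-neighbour "interpolation": round coordinates to integers.
xi = np.rint(map_x).astype(int)
yi = np.rint(map_y).astype(int)
dst = src[yi, xi]   # note: the row index is y, the column index is x

assert dst.tolist() == [[0.0, 3.0], [13.0, 14.0]]
```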
GMat cv::gapi::resize (const GMat &src, const Size &dsize, double fx = 0, double fy = 0, int interpolation = INTER_LINEAR)
Resizes an image.
The function resizes the image src down to or up to the specified size.
The output image will have the size dsize (when dsize is non-zero) or the size computed from src.size(), fx, and fy; the depth of the output is the same as that of src.
If you want to resize src so that it fits the pre-created dst, you may call the function as follows:
If you want to decimate the image by factor of 2 in each direction, you can call the function this way:
To shrink an image, it will generally look best with cv::INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with cv::INTER_CUBIC (slow) or cv::INTER_LINEAR (faster but still looks OK).
src input image. dsize output image size; if it equals zero, it is computed as:
\[\texttt{dsize = Size(round(fx*src.cols), round(fy*src.rows))}\]Either dsize or both fx and fy must be non-zero.
fx scale factor along the horizontal axis; when it equals 0, it is computed as
\[\texttt{(double)dsize.width/src.cols}\]
fy scale factor along the vertical axis; when it equals 0, it is computed as
\[\texttt{(double)dsize.height/src.rows}\]
interpolation interpolation method, see cv::InterpolationFlags
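How dsize, fx and fy interact follows directly from the formulas above: either an explicit dsize is given, or the size is computed as Size(round(fx*src.cols), round(fy*src.rows)). A small pure-Python sketch of just that size logic (the helper name `effective_size` is hypothetical, not an OpenCV function):

```python
# Size bookkeeping for resize: explicit dsize wins, otherwise scale factors apply.
def effective_size(src_rows, src_cols, dsize=None, fx=0.0, fy=0.0):
    if dsize is not None:
        return dsize
    # dsize == 0 case: Size(round(fx*src.cols), round(fy*src.rows)),
    # expressed here as (rows, cols).
    return (round(fy * src_rows), round(fx * src_cols))

# Decimate a 480x640 image by a factor of 2 in each direction.
assert effective_size(480, 640, fx=0.5, fy=0.5) == (240, 320)
# An explicit target size overrides fx/fy.
assert effective_size(480, 640, dsize=(100, 200)) == (100, 200)
```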
Divides a 3-channel (4-channel) matrix into 3(4) single-channel matrices.
The function splits a 3-channel (4-channel) matrix into 3(4) single-channel matrices:
\[\texttt{mv} [c](I) = \texttt{src} (I)_c\]
All output matrices must be of CV_8UC1.
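Since split3/split4 and merge3/merge4 are inverse operations (mv[c](I) = src(I)_c), a channel split followed by a merge should round-trip the input. A NumPy stand-in demonstrating this (illustrative only, not G-API code):

```python
import numpy as np

# A 3-channel 8-bit matrix, analogous to CV_8UC3.
src = np.random.randint(0, 256, size=(4, 5, 3)).astype(np.uint8)

# split3 analogue: one single-channel matrix per channel.
c0, c1, c2 = (src[:, :, c] for c in range(3))
# merge3 analogue: stack the channels back together.
merged = np.dstack([c0, c1, c2])

assert c0.shape == (4, 5) and merged.shape == (4, 5, 3)
assert np.array_equal(merged, src)   # split followed by merge recovers src
```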
Difference between revisions of "Vopenka"
==External links==
* [http://mathoverflow.net/questions/45602/can-vopenkas-principle-be-violated-definably Math Overflow question and answer about formalisations]
{{References}}
Revision as of 17:58, 23 July 2013 Vopěnka's principle is a large cardinal axiom at the upper end of the large cardinal hierarchy that is particularly notable for its applications to category theory. In a set theoretic setting, the most common definition is the following:
For any language $\mathcal{L}$ and any proper class $C$ of $\mathcal{L}$-structures, there are distinct structures $M, N\in C$ and an elementary embedding $j:M\to N$.
For example, taking $\mathcal{L}$ to be the language with one unary and one binary predicate, we can consider for any ordinal $\eta$ the class of structures $\langle V_{\alpha+\eta},\{\alpha\},\in\rangle$, and conclude from Vopěnka's principle that a cardinal that is at least $\eta$-extendible exists. In fact if Vopěnka's principle holds then there are a proper class of extendible cardinals; bounding the strength of the axiom from above, we have that if $\kappa$ is almost huge, then $V_\kappa$ satisfies Vopěnka's principle.
Formalisations
As stated above and from the point of view of ZFC, this is actually an axiom schema, as we quantify over proper classes, which from a purely ZFC perspective means definable proper classes. One alternative is to view Vopěnka's principle as an axiom in a class theory, such as von Neumann-Gödel-Bernays. Another is to consider a
Vopěnka cardinal, that is, a cardinal $\kappa$ that is inaccessible and such that $V_\kappa$ satisfies Vopěnka's principle when "proper class" is taken to mean "subset of $V_\kappa$ of cardinality $\kappa$". These three alternatives are, in the order listed, strictly increasing in strength [1].

Equivalent statements
The schema form of Vopěnka's principle is equivalent to the existence of a proper class of $C^{(n)}$-extendible cardinals for every $n$; indeed there is a level-by-level stratification of Vopěnka's principle, with Vopěnka's principle for a $\Sigma_{n+2}$-definable class corresponding to the existence of a $C^{(n)}$-extendible cardinal greater than the ranks of the parameters. [1]
Other points to note
Whilst Vopěnka cardinals are very strong in terms of consistency strength, a Vopěnka cardinal need not even be weakly compact. Indeed, the definition of a Vopěnka cardinal is a $\Pi^1_1$ statement over $V_\kappa$, and $\Pi^1_1$ indescribability is one of the equivalent definitions of weak compactness. Thus, the least weakly compact Vopěnka cardinal must have (many) other Vopěnka cardinals less than it.
References
[1] Bagaria, Joan; Casacuberta, Carles; Mathias, A. R. D.; Rosický, Jiří. Definable orthogonality classes in accessible categories are small. Journal of the European Mathematical Society 17(3):549-589.
Global asymptotic stability in a chemotaxis-growth model for tumor invasion
Department of Mathematics, Tokyo University of Science, Tokyo 162-8601, Japan
$ \left\{ \begin{array}{l} u_t = \Delta u - \nabla \cdot (u\nabla v) + ru -\mu u^\alpha, \qquad x\in \Omega, \ t>0, \\ \ v_t = \Delta v + wz, \qquad x\in \Omega, \ t>0, \\ \ w_t = -wz, \qquad x\in \Omega, \ t>0, \\ \ z_t = \Delta z - z + u, \qquad x\in \Omega, \ t>0, \end{array} \right. $
$ \Omega \subset \mathbb{R}^n $
$ n \le 3 $
$ r>0 $
$ \mu>0 $
$ \alpha>1 $
$ ru-\mu u^\alpha $
Mathematics Subject Classification: Primary: 35B40, 35Q92; Secondary: 92C17.
Citation: Kentarou Fujie. Global asymptotic stability in a chemotaxis-growth model for tumor invasion. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020011
Difference between revisions of "Weakly compact"
Revision as of 12:05, 11 November 2017
Weakly compact cardinals lie at the focal point of a number of diverse concepts in infinite combinatorics, admitting various characterizations in terms of these concepts. If $\kappa^{{<}\kappa} = \kappa$, then the following are equivalent:
* Weak compactness: A cardinal $\kappa$ is weakly compact if and only if it is uncountable and every $\kappa$-satisfiable theory in an $\mathcal{L}_{\kappa,\kappa}$ language of size at most $\kappa$ is satisfiable.
* Extension property: A cardinal $\kappa$ is weakly compact if and only if for every $A\subset V_\kappa$, there is a transitive structure $W$ properly extending $V_\kappa$ and $A^*\subset W$ such that $\langle V_\kappa,{\in},A\rangle\prec\langle W,{\in},A^*\rangle$.
* Tree property: A cardinal $\kappa$ is weakly compact if and only if it is inaccessible and has the tree property.
* Filter property: A cardinal $\kappa$ is weakly compact if and only if whenever $M$ is a set containing at most $\kappa$-many subsets of $\kappa$, then there is a $\kappa$-complete nonprincipal filter $F$ measuring every set in $M$.
* Weak embedding property: A cardinal $\kappa$ is weakly compact if and only if for every $A\subset\kappa$ there is a transitive set $M$ of size $\kappa$ with $\kappa\in M$ and a transitive set $N$ with an embedding $j:M\to N$ with critical point $\kappa$.
* Embedding characterization: A cardinal $\kappa$ is weakly compact if and only if for every transitive set $M$ of size $\kappa$ with $\kappa\in M$ there is a transitive set $N$ and an embedding $j:M\to N$ with critical point $\kappa$.
* Normal embedding characterization: A cardinal $\kappa$ is weakly compact if and only if for every $\kappa$-model $M$ there is a $\kappa$-model $N$ and an embedding $j:M\to N$ with critical point $\kappa$, such that $N=\{\ j(f)(\kappa)\mid f\in M\ \}$.
* Hauser embedding characterization: A cardinal $\kappa$ is weakly compact if and only if for every $\kappa$-model $M$ there is a $\kappa$-model $N$ and an embedding $j:M\to N$ with critical point $\kappa$ such that $j,M\in N$.
* Partition property: A cardinal $\kappa$ is weakly compact if and only if it enjoys the partition property $\kappa\to(\kappa)^2_2$.
* Indescribability property: A cardinal $\kappa$ is weakly compact if and only if it is $\Pi_1^1$-indescribable.
Weakly compact cardinals first arose in connection with (and were named for) the question of whether certain infinitary logics satisfy the compactness theorem of first order logic. Specifically, in a language with a signature consisting, as in the first order context, of a set of constant, finitary function and relation symbols, we build up the language of $\mathcal{L}_{\kappa,\lambda}$ formulas by closing the collection of formulas under infinitary conjunctions $\wedge_{\alpha<\delta}\varphi_\alpha$ and disjunctions $\vee_{\alpha<\delta}\varphi_\alpha$ of any size $\delta<\kappa$, as well as infinitary quantification $\exists\vec x$ and $\forall\vec x$ over blocks of variables $\vec x=\langle x_\alpha\mid\alpha<\delta\rangle$ of size less than $\kappa$. A theory in such a language is satisfiable if it has a model under the natural semantics. A theory is $\theta$-satisfiable if every subtheory consisting of fewer than $\theta$ many sentences of it is satisfiable. First order logic is precisely $\mathcal{L}_{\omega,\omega}$, and the classical Compactness theorem asserts that every $\omega$-satisfiable $\mathcal{L}_{\omega,\omega}$ theory is satisfiable. An uncountable cardinal $\kappa$ is strongly compact if every $\kappa$-satisfiable $\mathcal{L}_{\kappa,\kappa}$ theory is satisfiable. The cardinal $\kappa$ is weakly compact if every $\kappa$-satisfiable $\mathcal{L}_{\kappa,\kappa}$ theory, in a language having at most $\kappa$ many constant, function and relation symbols, is satisfiable.
Next, for any cardinal $\kappa$, a $\kappa$-tree is a tree of height $\kappa$, all of whose levels have size less than $\kappa$. More specifically, $T$ is a tree if $T$ is a partial order such that the predecessors of any node in $T$ are well ordered. The $\alpha^{\rm th}$ level of a tree $T$, denoted $T_\alpha$, consists of the nodes whose predecessors have order type exactly $\alpha$, and these nodes are also said to have height $\alpha$. The height of the tree $T$ is the first $\alpha$ for which $T$ has no nodes of height $\alpha$. A $\kappa$-branch through a tree $T$ is a maximal linearly ordered subset of $T$ of order type $\kappa$. Such a branch selects exactly one node from each level, in a linearly ordered manner. The set of $\kappa$-branches is denoted $[T]$. A $\kappa$-tree is an Aronszajn tree if it has no $\kappa$-branches. A cardinal $\kappa$ has the tree property if every $\kappa$-tree has a $\kappa$-branch.
A transitive set $M$ is a $\kappa$-model of set theory if $|M|=\kappa$, $M^{\lt\kappa}\subset M$ and $M$ satisfies ZFC$^-$, the theory ZFC without the power set axiom (and using collection and separation rather than merely replacement). For any infinite cardinal $\kappa$ we have that $H_{\kappa^+}$ models ZFC$^-$, and further, if $M\prec H_{\kappa^+}$ and $\kappa\subset M$, then $M$ is transitive. Thus, any $A\in H_{\kappa^+}$ can be placed into such an $M$. If $\kappa^{\lt\kappa}=\kappa$, one can use the downward Löwenheim-Skolem theorem to find such $M$ with $M^{\lt\kappa}\subset M$. So in this case there are abundant $\kappa$-models of set theory (and conversely, if there is a $\kappa$-model of set theory, then $2^{\lt\kappa}=\kappa$).
The partition property $\kappa\to(\lambda)^n_\gamma$ asserts that for every function $F:[\kappa]^n\to\gamma$ there is $H\subset\kappa$ with $|H|=\lambda$ such that $F\upharpoonright[H]^n$ is constant. If one thinks of $F$ as coloring the $n$-tuples, the partition property asserts the existence of a monochromatic set $H$, since all tuples from $H$ get the same color. The partition property $\kappa\to(\kappa)^2_2$ asserts that every partition of $[\kappa]^2$ into two sets admits a set $H\subset\kappa$ of size $\kappa$ such that $[H]^2$ lies on one side of the partition. When defining $F:[\kappa]^n\to\gamma$, we define $F(\alpha_1,\ldots,\alpha_n)$ only when $\alpha_1<\cdots<\alpha_n$.

Weakly compact cardinals and the constructible universe
Nevertheless, the weak compactness property is not generally downward absolute between transitive models of set theory.
Weakly compact cardinals and forcing

Weakly compact cardinals are invariant under small forcing. [1] Weakly compact cardinals are preserved by the canonical forcing of the GCH, by fast function forcing and many other forcing notions [citation needed]. If $\kappa$ is weakly compact, there is a forcing extension in which $\kappa$ remains weakly compact and $2^\kappa\gt\kappa$ [citation needed]. If the existence of weakly compact cardinals is consistent with ZFC, then there is a model of ZFC in which $\kappa$ is not weakly compact, but becomes weakly compact in a forcing extension [2].

Indestructibility of a weakly compact cardinal

To expand using [2]

Relations with other large cardinals

Every weakly compact cardinal is inaccessible, Mahlo, hyper-Mahlo, hyper-hyper-Mahlo and more. Every measurable cardinal, Ramsey cardinal, and totally indescribable cardinal is weakly compact and is a stationary limit of weakly compact cardinals. Assuming the consistency of a strongly unfoldable cardinal with ZFC, it is also consistent for the least weakly compact cardinal to be the least unfoldable cardinal. [3] If GCH holds, then the least weakly compact cardinal is not weakly measurable. However, if there is a measurable cardinal, then it is consistent for the least weakly compact cardinal to be weakly measurable. [3] If there is a $\kappa$ which is nearly $\theta$-supercompact where $\theta^{<\kappa}=\theta$, then it is consistent for the least weakly compact cardinal to be nearly $\theta$-supercompact. [3]
References

[1] Jech, Thomas J. Set Theory. The third millennium edition, revised and expanded. Springer-Verlag, Berlin, 2003.
[2] Kunen, Kenneth. Saturated Ideals. J. Symbolic Logic 43(1):65-76, 1978.
[3] Cody, Brent, Gitik, Moti, Hamkins, Joel David, and Schanker, Jason. The Least Weakly Compact Cardinal Can Be Unfoldable, Weakly Measurable and Nearly θ-Supercompact. 2013. arXiv.
1Department of Computer and Inst. Tech. Edu., Faculty of Education, Agri Ibrahim Cecen University, Agri, Turkey
2Department of Mathematics, Faculty of Science, Ege University, 35100 Bornova-Izmir, Turkey
Abstract
An exponential dominating set of a graph $G = (V,E)$ is a subset $S\subseteq V(G)$ such that $\sum_{u\in S}(1/2)^{\overline{d}(u,v)-1}\geq 1$ for every vertex $v$ in $V(G)-S$, where $\overline{d}(u,v)$ is the distance between vertices $u \in S$ and $v \in V(G)-S$ in the graph $G-(S-\{u\})$. The exponential domination number, $\gamma_{e}(G)$, is the smallest cardinality of an exponential dominating set. Graph operations are important methods for constructing new graphs, and they play key roles in the design and analysis of networks. In this study, we consider the exponential domination number of graph operations including edge corona, neighborhood corona and power.
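To make the definition concrete, here is a minimal brute-force sketch (my own illustration, not from the paper; the graph encoding and helper names are assumptions) that checks the exponential-domination condition by computing $\overline{d}(u,v)$ via breadth-first search in $G-(S-\{u\})$:

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, forbidden, src, dst):
    """Shortest-path length from src to dst avoiding the 'forbidden' vertices."""
    if src == dst:
        return 0
    seen, q = {src}, deque([(src, 0)])
    while q:
        x, d = q.popleft()
        for y in adj[x]:
            if y in forbidden or y in seen:
                continue
            if y == dst:
                return d + 1
            seen.add(y)
            q.append((y, d + 1))
    return float("inf")  # unreachable: contributes weight 0 below

def is_exp_dominating(adj, S):
    """Check sum_{u in S} (1/2)^(dbar(u,v)-1) >= 1 for every v outside S,
    where dbar(u,v) is the distance in G - (S - {u})."""
    S = set(S)
    for v in set(adj) - S:
        weight = sum(0.5 ** (bfs_dist(adj, S - {u}, u, v) - 1) for u in S)
        if weight < 1:
            return False
    return True

def exp_domination_number(adj):
    """Smallest size of an exponential dominating set (exhaustive search)."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        if any(is_exp_dominating(adj, S) for S in combinations(vertices, k)):
            return k
```

For example, on the path $P_4$ (vertices $0$-$1$-$2$-$3$) the search finds the exponential dominating set $\{0,2\}$, so the brute force returns $\gamma_e(P_4)=2$.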
Boundedness in a quasilinear fully parabolic Keller-Segel system via maximal Sobolev regularity
1.
Department of Mathematics and Informatics, Graduate School of Science, Chiba University, 1-33, Yayoi-cho, Inage, Chiba 263-8522, Japan
2.
Department of Mathematics, Tokyo University of Science, 1-3, Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan
$ \begin{align*} \begin{cases} u_t = \nabla\cdot(D(u)\nabla u)-\nabla\cdot(S(u)\nabla v), &x \in \Omega, \ t>0, \\ \ v_t = \Delta v - v +u, &x \in \Omega, \ t>0 \end{cases} \end{align*} $
where $ \Omega = \mathbb{R}^N $ or $ \Omega\subset \mathbb{R}^N $ is a bounded domain, with initial data $ u_0\in L^1(\Omega) \cap L^\infty(\Omega) $ and $ v_0\in L^1(\Omega) \cap W^{1, \infty}(\Omega) $. Here $ D(u) $ and $ S(u) $ are the diffusion and sensitivity functions, assumed to satisfy $ D(u)\ge u^{m-1}\ (m\geq1) $ and $ S(u)\leq u^{q-1}\ (q\geq 2) $. Under the subcritical condition $ q<m+\frac{2}{N} $, solutions are shown to be bounded; this extends a previous result in the case $ \Omega = \mathbb{R}^N $ (J. Differential Equations 2012; 252:1421-1440). The proof is based on the maximal Sobolev regularity for the second equation. This also simplifies a previous proof given by Tao-Winkler (J. Differential Equations 2012; 252:692-715) in the case of bounded domains. Mathematics Subject Classification: Primary: 35K51; Secondary: 35B35. Citation: Sachiko Ishida, Tomomi Yokota. Boundedness in a quasilinear fully parabolic Keller-Segel system via maximal Sobolev regularity. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020012
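A worked instance of the subcritical exponent condition (my own arithmetic, added for illustration): with porous-medium-type diffusion exponent $m=2$ in dimension $N=3$,

```latex
% Subcritical range for the sensitivity exponent when m = 2, N = 3:
q \;<\; m + \frac{2}{N} \;=\; 2 + \frac{2}{3} \;=\; \frac{8}{3},
```

so any sensitivity exponent $q\in[2,8/3)$ falls in the boundedness regime, while, say, $q=3$ is not covered by this condition.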
References:
[1] [2]
N. Bellomo, A. Bellouquid, Y. Tao and M. Winkler,
Toward a mathematical theory of Keller-Segel models of pattern formation in biological tissues,
[3] [4] [5]
T. Ciéslak and C. Stinner,
Finite-time blowup and global-in-time unbounded solutions to a parabolic-parabolic quasilinear Keller-Segel system in higher dimensions,
[6] [7]
M. Hieber and J. Prüss,
Heat kernels and maximal $L^q$ estimates for parabolic evolution equations, Comm. Partial Differential Equations, 22 (1997), 1647-1669.
doi: 10.1080/03605309708821314.
[8] [9] [10]
S. Ishida, Y. Maeda and T. Yokota,
Gradient estimate for solutions to quasilinear non-degenerate Keller-Segel systems on $\mathbb{R}^N$,
[11]
S. Ishida, T. Ono and T. Yokota,
Possibility of the existence of blow-up solutions to quasilinear degenerate Keller-Segel systems of parabolic-parabolic type,
[12]
S. Ishida, K. Seki and T. Yokota,
Boundedness in quasilinear Keller-Segel systems of parabolic-parabolic type on non-convex bounded domains,
[13]
S. Ishida and T. Yokota,
Global existence of weak solutions to quasilinear degenerate Keller-Segel systems of parabolic-parabolic type,
[14]
S. Ishida and T. Yokota,
Global existence of weak solutions to quasilinear degenerate Keller-Segel systems of parabolic-parabolic type with small data,
[15]
S. Ishida and T. Yokota, Remarks on the global existence of weak solutions to quasilinear degenerate Keller-Segel systems,
[16]
S. Ishida and T. Yokota,
Blow-up in finite or infinite time for quasilinear degenerate Keller-Segel systems of parabolic-parabolic type,
[17] [18]
S. Kim and K.-A. Lee,
Hölder regularity and uniqueness theorem on weak solutions to the degenerate Keller-Segel system,
[19]
O. A. Ladyženskaja, V. A. Solonnikov and N. N. Ural'ceva,
[20]
M. Miura and Y. Sugiyama,
On uniqueness theorem on weak solutions to the parabolic-parabolic Keller-Segel system of degenerate and singular types,
[21] [22] [23] [24]
Y. Sugiyama and H. Kunii,
Global existence and decay properties for a degenerate Keller-Segel model with a power factor in drift term,
[25]
Y. Tao and M. Winkler,
Boundedness in a quasilinear parabolic-parabolic Keller-Segel system with subcritical sensitivity,
[26]
P. Weidemaier,
Maximal regularity for parabolic equations with inhomogeneous boundary conditions in Sobolev spaces with mixed $L_p$-norm, Electron. Res. Announc. Amer. Math. Soc., 8 (2002), 47-51.
doi: 10.1090/S1079-6762-02-00104-X.
[27] [28]