Column      | Type            | Min | Max
text        | stringlengths   | 138 | 2.38k
labels      | sequencelengths | 6   | 6
Predictions | sequencelengths | 1   | 3
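Each record below consists of three fields: the abstract text, a 6-element binary labels vector (ground truth), and a Predictions list of category names (model output). Below is a minimal loading-and-decoding sketch, assuming the data ships as JSON Lines with fields `text`, `labels`, and `Predictions`; the file name is hypothetical, and the category ordering is inferred from the single-category records in this dump, so it should be verified against the dataset's own documentation.

```python
import json

# Category order inferred from the single-label rows in this dump
# (an assumption; confirm against the dataset documentation).
CATEGORIES = [
    "Computer Science",
    "Physics",
    "Mathematics",
    "Statistics",
    "Quantitative Biology",
    "Quantitative Finance",
]

def decode_labels(label_vector):
    """Map a 6-element 0/1 vector to the corresponding category names."""
    return [name for name, flag in zip(CATEGORIES, label_vector) if flag]

# "arxiv_abstracts.jsonl" is a placeholder name for this dump.
with open("arxiv_abstracts.jsonl") as f:
    for line in f:
        row = json.loads(line)
        print(decode_labels(row["labels"]), "vs", row["Predictions"])
```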
Title: Training Deep AutoEncoders for Collaborative Filtering, Abstract: This paper proposes a novel model for the rating prediction task in recommender systems which significantly outperforms previous state-of-the-art models on a time-split Netflix data set. Our model is based on a deep autoencoder with 6 layers and is trained end-to-end without any layer-wise pre-training. We empirically demonstrate that: a) deep autoencoder models generalize much better than shallow ones, b) non-linear activation functions with negative parts are crucial for training deep models, and c) heavy use of regularization techniques such as dropout is necessary to prevent over-fitting. We also propose a new training algorithm based on iterative output re-feeding to overcome the natural sparseness of collaborative filtering. The new algorithm significantly speeds up training and improves model performance. Our code is available at this https URL
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Unit circle rectification of the MVDR beamformer, Abstract: The sample matrix inversion (SMI) beamformer implements Capon's minimum variance distortionless response (MVDR) beamforming using the sample covariance matrix (SCM). In a snapshot limited environment, the SCM is poorly conditioned, resulting in suboptimal performance from the SMI beamformer. Imposing structural constraints on the SCM estimate to satisfy known theoretical properties of the ensemble MVDR beamformer mitigates the impact of limited snapshots on the SMI beamformer performance. Toeplitz rectification and bounding the norm of the weight vector are common approaches for such constraints. This paper proposes the unit circle rectification technique, which constrains the SMI beamformer to satisfy a property of the ensemble MVDR beamformer: for narrowband planewave beamforming on a uniform linear array, the zeros of the MVDR weight array polynomial must fall on the unit circle. Numerical simulations show that the resulting unit circle MVDR (UC MVDR) beamformer frequently improves the suppression of both discrete interferers and white background noise compared to the classic SMI beamformer. Moreover, the UC MVDR beamformer is shown to suppress discrete interferers better than the MVDR beamformer diagonally loaded to maximize the SINR.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Jackknife Empirical Likelihood-based inference for S-Gini indices, Abstract: The widely used income inequality measure, the Gini index, is extended to form a family of income inequality measures known as Single-Series Gini (S-Gini) indices. In this study, we develop empirical likelihood (EL) and jackknife empirical likelihood (JEL) based inference for S-Gini indices. We prove that the limiting distribution of both the EL and JEL ratio statistics is a Chi-square distribution with one degree of freedom. Using the asymptotic distribution, we construct EL and JEL based confidence intervals for relative S-Gini indices. We also give bootstrap-t and bootstrap calibrated empirical likelihood confidence intervals for S-Gini indices. A numerical study is carried out to compare the performance of the proposed confidence intervals with the bootstrap methods. A test for S-Gini indices based on the jackknife empirical likelihood ratio is also proposed. Finally, we illustrate the proposed method using income data.
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Quantitative Finance" ]
Title: A maximum principle for free boundary minimal varieties of arbitrary codimension, Abstract: We establish a boundary maximum principle for free boundary minimal submanifolds in a Riemannian manifold with boundary, in any dimension and codimension. Our result holds more generally in the context of varifolds.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Optimal paths on the road network as directed polymers, Abstract: We analyze the statistics of the shortest and fastest paths on the road network between randomly sampled end points. To a good approximation, these optimal paths are found to be directed in that their lengths (at large scales) are linearly proportional to the absolute distance between their end points. This motivates comparisons to universal features of directed polymers in random media. There are similarities in the scalings of fluctuations in length/time and transverse wanderings, but also important distinctions in the scaling exponents, likely due to long-range correlations in geographic and man-made features. At short scales the optimal paths are not directed, due to circuitous excursions governed by a fat-tailed (power-law) probability distribution.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Optimal proportional reinsurance and investment for stochastic factor models, Abstract: In this work we investigate the optimal proportional reinsurance-investment strategy of an insurance company which wishes to maximize the expected exponential utility of its terminal wealth over a finite time horizon. Our goal is to extend the classical Cramér-Lundberg model by introducing a stochastic factor which affects the intensity of the claims arrival process, described by a Cox process, as well as the insurance and reinsurance premia. Using the classical stochastic control approach based on the Hamilton-Jacobi-Bellman equation, we characterize the optimal strategy and provide a verification result for the value function via classical solutions of two backward partial differential equations. Existence and uniqueness of these solutions are discussed. Results under various premium calculation principles are illustrated and a new premium calculation rule is proposed in order to get more realistic strategies and to better fit our stochastic factor model. Finally, numerical simulations are performed to obtain sensitivity analyses.
[ 0, 0, 0, 0, 0, 1 ]
[ "Quantitative Finance", "Mathematics", "Statistics" ]
Title: Formal Black-Box Analysis of Routing Protocol Implementations, Abstract: The Internet infrastructure relies entirely on open standards for its routing protocols. However, the majority of routers on the Internet are closed-source. Hence, there is no straightforward way to analyze them. Specifically, one cannot easily identify deviations of a router's routing functionality from the routing protocol's standard. Such deviations (either deliberate or inadvertent) are particularly important to identify since they may degrade the security or resiliency of the network. Model-based testing is a technique that systematically generates tests from a model of the system under test, thereby finding deviations of the system from the model. However, applying such an approach to a complex multi-party routing protocol requires a prohibitively high number of tests to cover the desired functionality. We propose efficient and practical optimizations to the model-based testing procedure that are tailored to the analysis of routing protocols. These optimizations allow us to devise a formal black-box method to unearth deviations in closed-source implementations of routing protocols. The method relies only on the ability to test the targeted protocol implementation and observe its output. Identification of the deviations is fully automatic. We evaluate our method against one of the most complex and widely used routing protocols on the Internet -- OSPF. We search for deviations in the OSPF implementation of Cisco. Our evaluation identified numerous significant deviations that can be abused to compromise the security of a network. The deviations were confirmed by Cisco. We further employed our method to analyze the OSPF implementation of the Quagga Routing Suite. The analysis revealed one significant deviation. Subsequent to the disclosure of the deviations, some of them were also identified by IBM, Lenovo and Huawei in their own products.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Active Decision Boundary Annotation with Deep Generative Models, Abstract: This paper is on active learning where the goal is to reduce the data annotation burden by interacting with a (human) oracle during training. Standard active learning methods ask the oracle to annotate data samples. Instead, we take a profoundly different approach: we ask for annotations of the decision boundary. We achieve this using a deep generative model to create novel instances along a 1d line. A point on the decision boundary is revealed where the instances change class. Experimentally we show on three data sets that our method can be plugged-in to other active learning schemes, that human oracles can effectively annotate points on the decision boundary, that our method is robust to annotation noise, and that decision boundary annotations improve over annotating data samples.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Extended Kitaev chain with longer-range hopping and pairing, Abstract: We consider the Kitaev chain model with finite and infinite range in the hopping and pairing parameters, looking in particular at the appearance of Majorana zero energy modes and massive edge modes. We study the system both in the presence and in the absence of time reversal symmetry, by means of topological invariants and exact diagonalization, revealing very rich phase diagrams. In particular, for extended hopping and pairing terms, we can get as many Majorana modes at each end of the chain as the number of neighbors involved in the couplings. Finally, we generalize the transfer matrix approach used to calculate the zero-energy Majorana modes at the edges to a generic number of coupled neighbors.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Effects of atrial fibrillation on the arterial fluid dynamics: a modelling perspective, Abstract: Atrial fibrillation (AF) is the most common form of arrhythmia, with accelerated and irregular heart rate (HR), leading to both heart failure and stroke and being responsible for an increase in cardiovascular morbidity and mortality. In spite of its importance, the direct effects of AF on the arterial hemodynamic patterns are not completely known to date. Based on a multiscale modelling approach, the proposed work investigates the effects of AF on the local arterial fluid dynamics. AF and normal sinus rhythm (NSR) conditions are simulated by extracting 2000 $\mathrm{RR}$ heartbeats and comparing the most relevant cardiac and vascular parameters at the same HR (75 bpm). The present results show that the arterial system is not able to completely absorb the AF-induced variability, which can even be amplified towards the peripheral circulation. AF is also able to locally alter the wave dynamics, by modifying the interplay between forward and backward signals. The sole heart rhythm variation (i.e., from NSR to AF) promotes an alteration of the regular dynamics at the arterial level which, in terms of pressure and peripheral perfusion, suggests a modification of the physiological phenomena ruled by periodicity (e.g., regular organ perfusion) and a possible vascular dysfunction due to prolonged exposure to irregular and extreme values. The present study represents a first modelling approach to characterize the variability of arterial hemodynamics in the presence of AF, which surely deserves further clinical investigation.
[ 0, 0, 0, 0, 1, 0 ]
[ "Quantitative Biology", "Physics" ]
Title: Controllability and optimal control of the transport equation with a localized vector field, Abstract: We study controllability of a Partial Differential Equation of transport type that arises in crowd models. We are interested in controlling such a system with a control being a Lipschitz vector field on a fixed control set $\omega$. We prove that, for each initial and final configuration, one can steer one to the other with this class of controls only if the uncontrolled dynamics allows crossing the control set $\omega$. We also prove a minimal time result for such systems. We show that the minimal time to steer one initial configuration to another is related to the condition of having enough mass in $\omega$ to feed the desired final configuration.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: The Value of Sharing Intermittent Spectrum, Abstract: Recent initiatives by regulatory agencies to increase spectrum resources available for broadband access include rules for sharing spectrum with high-priority incumbents. We study a model in which wireless Service Providers (SPs) charge for access to their own exclusive-use (licensed) band along with access to an additional shared band. The total, or delivered, price in each band is the announced price plus a congestion cost, which depends on the load, i.e., the total users normalized by the bandwidth. The shared band is intermittently available with some probability, due to incumbent activity, and when it is unavailable, any traffic carried on that band must be shifted to licensed bands. The SPs then compete for the quantity of users. We show that the value of the shared band depends on the relative sizes of the SPs: large SPs with more bandwidth are better able to absorb the variability caused by intermittency than smaller SPs. However, as the amount of shared spectrum increases, the large SPs may not make use of it. In that scenario, shared spectrum creates more value than splitting it among the SPs for exclusive use. We also show that, fixing the average amount of available shared bandwidth, increasing the reliability of the band is preferable to increasing the bandwidth.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Quantitative Finance" ]
Title: Origin of layer dependence in band structures of two-dimensional materials, Abstract: We study the origin of layer dependence in band structures of two-dimensional materials. We find that the layer dependence, at the density functional theory (DFT) level, is a result of quantum confinement and the non-linearity of the exchange-correlation functional. We use this to develop an efficient scheme for performing DFT and GW calculations of multilayer systems. We show that the DFT and quasiparticle band structures of a multilayer system can be derived from a single calculation on a monolayer of the material. We test this scheme on multilayers of MoS$_2$, graphene and phosphorene. This new scheme yields results in excellent agreement with the standard methods at a fraction of the computation cost. This helps overcome the challenge of performing fully converged GW calculations on multilayers of 2D materials, particularly in the case of transition metal dichalcogenides which involve very stringent convergence parameters.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Computer Science" ]
Title: Sharp measure contraction property for generalized H-type Carnot groups, Abstract: We prove that H-type Carnot groups of rank $k$ and dimension $n$ satisfy the $\mathrm{MCP}(K,N)$ if and only if $K\leq 0$ and $N \geq k+3(n-k)$. The latter integer coincides with the geodesic dimension of the Carnot group. The same result holds true for the larger class of generalized H-type Carnot groups introduced in this paper, and for which we compute explicitly the optimal synthesis. This constitutes the largest class of Carnot groups for which the curvature exponent coincides with the geodesic dimension. We stress that generalized H-type Carnot groups have step 2, include all corank 1 groups and, in general, admit abnormal minimizing curves. As a corollary, we prove the absolute continuity of the Wasserstein geodesics for the quadratic cost on all generalized H-type Carnot groups.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Twisted Quantum Double Model of Topological Orders with Boundaries, Abstract: We generalize the twisted quantum double model of topological orders in two dimensions to the case with boundaries by systematically constructing the boundary Hamiltonians. Given the bulk Hamiltonian defined by a gauge group $G$ and a three-cocycle in the third cohomology group of $G$ over $U(1)$, a boundary Hamiltonian can be defined by a subgroup $K$ of $G$ and a two-cochain in the second cochain group of $K$ over $U(1)$. The consistency between the bulk and boundary Hamiltonians is dictated by what we call the Frobenius condition, which constrains the two-cochain given the three-cocycle. We offer a closed-form formula computing the ground state degeneracy of the model on a cylinder in terms of the input data only, which can be naturally generalized to surfaces with more boundaries. We also explicitly write down the ground-state wavefunction of the model on a disk, again in terms of the input data only.
[ 0, 1, 1, 0, 0, 0 ]
[ "Physics" ]
Title: Nearly resolution V plans on blocks of small size, Abstract: In Bagchi (2010), main effect plans "orthogonal through the block factor" (POTB) were constructed. The main advantages of a POTB are that (a) it may exist in a set-up where a "usual" orthogonal main effect plan (OMEP) cannot exist and (b) the data analysis is nearly as simple as for an OMEP. In the present paper we extend this idea and define the concept of orthogonality between a pair of factorial effects (main effects or interactions) "through the block factor" in the context of a symmetrical experiment. We consider plans generated from an initial plan by adding runs. For such a plan we derive necessary and sufficient conditions, in terms of the generators, for a pair of effects to be orthogonal through the block factor. We also derive a sufficient condition on the generators under which a pair of effects aliased in the initial plan becomes separated in the final plan. The theory developed is illustrated with plans for experiments with three-level factors in situations where interactions between three or more factors are absent. We construct plans with blocks of size four and fewer runs than a resolution $V$ plan, estimating all main effects and all but at most one of the two-factor interactions.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: In situ accretion of gaseous envelopes on to planetary cores embedded in evolving protoplanetary discs, Abstract: The core accretion hypothesis posits that planets with significant gaseous envelopes accreted them from their protoplanetary discs after the formation of rocky/icy cores. Observations indicate that such exoplanets exist at a broad range of orbital radii, but it is not known whether they accreted their envelopes in situ, or originated elsewhere and migrated to their current locations. We consider the evolution of solid cores embedded in evolving viscous discs that undergo gaseous envelope accretion in situ with orbital radii in the range $0.1-10\rm au$. Additionally, we determine the long-term evolution of the planets that had no runaway gas accretion phase after disc dispersal. We find: (i) Planets with $5 \rm M_{\oplus}$ cores never undergo runaway accretion. The most massive envelope contained $2.8 \rm M_{\oplus}$ with the planet orbiting at $10 \rm au$. (ii) Accretion is more efficient onto $10 \rm M_{\oplus}$ and $15 \rm M_{\oplus}$ cores. For orbital radii $a_{\rm p} \ge 0.5 \rm au$, $15 \rm M_{\oplus}$ cores always experienced runaway gas accretion. For $a_{\rm p} \ge 5 \rm au$, all but one of the $10 \rm M_{\oplus}$ cores experienced runaway gas accretion. No planets experienced runaway growth at $a_{\rm p} = 0.1 \rm au$. (iii) After disc dispersal, planets with significant gaseous envelopes cool and contract on Gyr time-scales, the contraction time being sensitive to the opacity assumed. Our results indicate that Hot Jupiters with core masses $\lesssim 15 \rm M_{\oplus}$ at $\lesssim 0.1 \rm au$ likely accreted their gaseous envelopes at larger distances and migrated inwards. Consistent with the known exoplanet population, Super-Earths and mini-Neptunes at small radii accrete only modest gaseous envelopes during the disc lifetime.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: New Two Step Laplace Adam-Bashforth Method for Integer and Non-integer Order Partial Differential Equations, Abstract: This paper presents a novel method that generalises the use of the Adam-Bashforth scheme to Partial Differential Equations with local and non-local operators. The method derives a two step Adam-Bashforth numerical scheme in Laplace space, and the solution is taken back into the real space via the inverse Laplace transform. The method yields a powerful numerical algorithm for fractional order derivatives, where the usually very difficult to manage summation in the numerical scheme disappears. An error analysis of the method is also presented. Applications of the method and numerical simulations are presented for a wave-equation-like problem and for a fractional order diffusion equation.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics", "Computer Science" ]
Title: The Impact of Alternation, Abstract: Alternating automata have been widely used to model and verify systems that handle data from finite domains, such as communication protocols or hardware. The main advantage of the alternating model of computation is that complementation is possible in linear time, which allows trace inclusion problems, occurring often in verification, to be encoded concisely. In this paper we consider alternating automata over infinite alphabets, whose transition rules are formulae in a combined theory of booleans and some infinite data domain, relating past and current values of the data variables. The data theory is not fixed, but rather is a parameter of the class. We show that union, intersection and complementation are possible in linear time in this model and, though the emptiness problem is undecidable, we provide two efficient semi-algorithms, inspired by two state-of-the-art abstraction refinement model checking methods: lazy predicate abstraction [HJMS02] and the Impact semi-algorithm [McMillan06]. We have implemented both methods and report the results of an experimental comparison.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Robust Causal Estimation in the Large-Sample Limit without Strict Faithfulness, Abstract: Causal effect estimation from observational data is an important and much studied research topic. The instrumental variable (IV) and local causal discovery (LCD) patterns are canonical examples of settings where a closed-form expression exists for the causal effect of one variable on another, given the presence of a third variable. Both rely on faithfulness to infer that the latter only influences the target effect via the cause variable. In reality, it is likely that this assumption only holds approximately and that there will be at least some form of weak interaction. This brings about the paradoxical situation that, in the large-sample limit, no predictions are made, as detecting the weak edge invalidates the setting. We introduce an alternative approach by replacing strict faithfulness with a prior that reflects the existence of many 'weak' (irrelevant) and 'strong' interactions. We obtain a posterior distribution over the target causal effect estimator which shows that, in many cases, we can still make good estimates. We demonstrate the approach in an application on a simple linear-Gaussian setting, using the MultiNest sampling algorithm, and compare it with established techniques to show our method is robust even when strict faithfulness is violated.
[ 1, 0, 0, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: Understanding looping kinetics of a long polymer molecule in solution. Exact solution for delocalized sink model, Abstract: The fundamental understanding of loop formation of long polymer chains in solution has been an important thread of theoretical and experimental research. Loop formation is an important phenomenological parameter in many biological processes. Here we give a general method for finding an exact analytical solution for the occurrence of looping of long polymer chains in solution, modeled by using a Smoluchowski-like equation with a delocalized sink. The average rate constant for the delocalized sink is explicitly expressed in terms of the corresponding rate constants for localized sinks with different initial conditions. Simple analytical expressions are provided for the average rate constant.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics", "Quantitative Biology" ]
Title: High-Level Concepts for Affective Understanding of Images, Abstract: This paper aims to bridge the affective gap between image content and the emotional response it elicits in the viewer by using High-Level Concepts (HLCs). In contrast to previous work that relied solely on low-level features or used a convolutional neural network (CNN) as a black box, we use HLCs generated by pretrained CNNs in an explicit way to investigate the relations/associations between these HLCs and a (small) set of Ekman's emotional classes. As a proof of concept, we first propose a linear admixture model for modeling these relations, and the resulting computational framework allows us to determine the associations between each emotion class and certain HLCs (objects and places). This linear model is further extended to a nonlinear model using support vector regression (SVR) that aims to predict the viewer's emotional response using both low-level image features and HLCs extracted from images. These class-specific regressors are then assembled into a regressor ensemble that provides a flexible and effective predictor of the viewer's emotional responses to images. Experimental results demonstrate that our results are comparable to those of existing methods, with a clear view of the association between HLCs and emotional classes that is ostensibly missing in most existing work.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: A Nonparametric Method for Producing Isolines of Bivariate Exceedance Probabilities, Abstract: We present a method for drawing isolines indicating regions of equal joint exceedance probability for bivariate data. The method relies on bivariate regular variation, a dependence framework widely used for extremes. This framework enables drawing isolines corresponding to very low exceedance probabilities, and these lines may lie beyond the range of the data. The method we utilize for characterizing dependence in the tail is largely nonparametric. Furthermore, we extend this method to the case of asymptotic independence and propose a procedure which smooths the transition from asymptotic independence in the interior to the first-order behavior on the axes. We propose a diagnostic plot for assessing the isoline estimate and the choice of smoothing, and a bootstrap procedure to visually assess uncertainty.
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: Finite element error analysis for measure-valued optimal control problems governed by a 1D wave equation with variable coefficients, Abstract: This work is concerned with optimal control problems governed by a 1D wave equation with variable coefficients and the control spaces $\mathcal M_T$ of either measure-valued functions $L_{w^*}^2(I,\mathcal M(\Omega))$ or vector measures $\mathcal M(\Omega,L^2(I))$. The cost functional involves the standard quadratic tracking terms and the regularization term $\alpha\|u\|_{\mathcal M_T}$ with $\alpha>0$. We construct and study three-level in time bilinear finite element discretizations for this class of problems. The main focus lies on the derivation of error estimates for the optimal state variable and the error measured in the cost functional. The analysis is mainly based on previous results of the authors. Numerical results are included.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: Real eigenvalues of a non-self-adjoint perturbation of the self-adjoint Zakharov-Shabat operator, Abstract: We study the eigenvalues of the self-adjoint Zakharov-Shabat operator corresponding to the defocusing nonlinear Schrodinger equation in the inverse scattering method. Real eigenvalues exist when the square of the potential has a simple well. We derive two types of quantization condition for the eigenvalues by using the exact WKB method, and show that the eigenvalues stay real for a sufficiently small non-self-adjoint perturbation when the potential has some PT-like symmetry.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Fourier-like multipliers and applications for integral operators, Abstract: Timelimited functions and bandlimited functions play a fundamental role in signal and image processing. But by the uncertainty principles, a signal cannot be simultaneously time and bandlimited. A natural assumption is thus that a signal is almost time and almost bandlimited. The aim of this paper is to prove that the set of almost time and almost bandlimited signals is not excluded from the uncertainty principles. The transforms under consideration are integral operators with bounded kernels for which there is a Parseval Theorem. Then we define the wavelet multipliers for this class of operators, and study their boundedness and Schatten class properties. We show that the wavelet multiplier is unitary equivalent to a scalar multiple of the phase space restriction operator. Moreover we prove that a signal which is almost time and almost bandlimited can be approximated by its projection on the span of the first eigenfunctions of the phase space restriction operator, corresponding to the largest eigenvalues which are close to one.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: Inferring Properties of the ISM from Supernova Remnant Size Distributions, Abstract: We model the size distribution of supernova remnants to infer the surrounding ISM density. Using simple, yet standard SNR evolution models, we find that the distribution of ambient densities is remarkably narrow; either the standard assumptions about SNR evolution are wrong, or observable SNRs are biased to a narrow range of ambient densities. We show that the size distributions are consistent with log-normal, which severely limits the number of model parameters in any SNR population synthesis model. Simple Monte Carlo simulations demonstrate that the size distribution is indistinguishable from log-normal when the SNR sample size is less than 600. This implies that these SNR distributions provide only information on the mean and variance, yielding additional information only when the sample size grows larger than $\sim{600}$ SNRs. To infer the parameters of the ambient density, we use Bayesian statistical inference under the assumption that SNR evolution is dominated by the Sedov phase. In particular, we use the SNR sizes and explosion energies to estimate the mean and variance of the ambient medium surrounding SNR progenitors. We find that the mean ISM particle density around our sample of SNRs is $\mu_{\log{n}} = -1.33$, in $\log_{10}$ of particles per cubic centimeter, with variance $\sigma^2_{\log{n}} = 0.49$. If interpreted at face value, this implies that most SNRs result from supernovae propagating in the warm, ionized medium. However, it is also likely that either SNR evolution is not dominated by the simple Sedov evolution or SNR samples are biased to the warm, ionized medium (WIM).
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Statistics" ]
Title: Mining Target Attribute Subspace and Set of Target Communities in Large Attributed Networks, Abstract: Community detection provides invaluable help for various applications, such as marketing and product recommendation. Traditional community detection methods designed for plain networks may not be able to detect communities with homogeneous attributes on attributed networks with attribute information. Most recent attributed community detection methods may fail to capture the requirements of a specific application and may not be able to mine the set of communities required by that application. In this paper, we aim to detect the set of target communities in a target subspace whose focus attributes have large importance weights, satisfying the requirements of a specific application. In order to improve the universality of the problem, we address it in an extreme case where only two sample nodes in any potential target community are provided. A Target Subspace and Communities Mining (TSCM) method is proposed. In TSCM, a sample information extension method is designed to extend the two sample nodes to a set of exemplar nodes from which the target subspace is inferred. The set of target communities is then located and mined based on the target subspace. Experiments on synthetic datasets demonstrate the effectiveness and efficiency of our method, and applications on real-world datasets show its application value.
[ 1, 1, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Cascaded Coded Distributed Computing on Heterogeneous Networks, Abstract: Coded distributed computing (CDC), introduced by Li et al. in 2015, offers an efficient approach to trade computing power for a reduced communication load in general distributed computing frameworks such as MapReduce. In the more general cascaded CDC, Map computations are repeated at $r$ nodes to significantly reduce the communication load among nodes tasked with computing $Q$ Reduce functions $s$ times. While an achievable cascaded CDC scheme has been proposed, it only operates on homogeneous networks, where the storage, computation load and communication load of each computing node are the same. In this paper, we address this limitation by proposing a novel combinatorial design which operates on heterogeneous networks where nodes have varying storage and computing capabilities. We provide an analytical characterization of the computation-communication trade-off and show that it is optimal within a constant factor and can outperform state-of-the-art homogeneous schemes.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Thermal and non-thermal emission from the cocoon of a gamma-ray burst jet, Abstract: We present hydrodynamic simulations of the hot cocoon produced when a relativistic jet passes through the gamma-ray burst (GRB) progenitor star and its environment, and we compute the lightcurve and spectrum of the radiation emitted by the cocoon. The radiation from the cocoon has a nearly thermal spectrum with a peak in the X-ray band, and it lasts for a few minutes in the observer frame; the cocoon radiation starts at roughly the same time as when $\gamma$-rays from a burst trigger detectors aboard GRB satellites. The isotropic cocoon luminosity ($\sim 10^{47}$ erg s$^{-1}$) is of the same order of magnitude as the X-ray luminosity of a typical long-GRB afterglow during the plateau phase. This radiation should be identifiable in the Swift data because of its nearly thermal spectrum which is distinct from the somewhat brighter power-law component. The detection of this thermal component would provide information regarding the size and density stratification of the GRB progenitor star. Photons from the cocoon are also inverse-Compton (IC) scattered by electrons in the relativistic jet. We present the IC lightcurve and spectrum, by post-processing the results of the numerical simulations. The IC spectrum lies in 10 keV--MeV band for typical GRB parameters. The detection of this IC component would provide an independent measurement of GRB jet Lorentz factor and it would also help to determine the jet magnetisation parameter.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Astrophysics" ]
Title: Higher Tetrahedral Algebras, Abstract: We introduce and study the higher tetrahedral algebras, an exotic family of finite-dimensional tame symmetric algebras over an algebraically closed field. The Gabriel quiver of such an algebra is the triangulation quiver associated to the coherent orientation of the tetrahedron. Surprisingly, these algebras occurred in the classification of all algebras of generalised quaternion type, but are not weighted surface algebras. We prove that a higher tetrahedral algebra is periodic if and only if it is non-singular.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Adaptive multi-penalty regularization based on a generalized Lasso path, Abstract: For many algorithms, parameter tuning remains a challenging and critical task, which becomes tedious and infeasible in a multi-parameter setting. Multi-penalty regularization, successfully used for solving underdetermined sparse regression problems of unmixing type, where signal and noise are additively mixed, is one such example. In this paper, we propose a novel algorithmic framework for an adaptive parameter choice in multi-penalty regularization with a focus on correct support recovery. Building upon the theory of regularization paths and algorithms for single-penalty functionals, we extend these ideas to a multi-penalty framework by providing an efficient procedure for the construction of regions containing structurally similar solutions, i.e., solutions with the same sparsity and sign pattern, over the whole range of parameters. Combining this with a model selection criterion, we can choose regularization parameters in a data-adaptive manner. Another advantage of our algorithm is that it provides an overview of the solution stability over the whole range of parameters. This can be further exploited to obtain additional insights into the problem of interest. We provide a numerical analysis of our method and compare it to state-of-the-art single-penalty algorithms for compressed sensing problems in order to demonstrate the robustness and power of the proposed algorithm.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics", "Statistics" ]
Title: Shape Generation using Spatially Partitioned Point Clouds, Abstract: We propose a method to generate 3D shapes using point clouds. Given a point-cloud representation of a 3D shape, our method builds a kd-tree to spatially partition the points. This orders them consistently across all shapes, resulting in reasonably good correspondences across all shapes. We then use PCA analysis to derive a linear shape basis across the spatially partitioned points, and optimize the point ordering by iteratively minimizing the PCA reconstruction error. Even with the spatial sorting, the point clouds are inherently noisy and the resulting distribution over the shape coefficients can be highly multi-modal. We propose to use the expressive power of neural networks to learn a distribution over the shape coefficients in a generative-adversarial framework. Compared to 3D shape generative models trained on voxel-representations, our point-based method is considerably more light-weight and scalable, with little loss of quality. It also outperforms simpler linear factor models such as Probabilistic PCA, both qualitatively and quantitatively, on a number of categories from the ShapeNet dataset. Furthermore, our method can easily incorporate other point attributes such as normal and color information, an additional advantage over voxel-based representations.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Parameter Estimation in Mean Reversion Processes with Periodic Functional Tendency, Abstract: This paper describes a procedure to estimate the parameters of mean reversion processes with a functional tendency defined by a periodic continuous deterministic function, expressed as a truncated Fourier series. Two phases of estimation are defined. In the first phase, through Gaussian techniques using the Euler-Maruyama discretization, we obtain the maximum likelihood function, which allows us to find estimators of the external parameters and an estimate of the expected value of the process. In the second phase, the periodic functional tendency, with its phase and amplitude parameters, is re-estimated, improving the initial estimation. Experimental results using simulated data sets are graphically illustrated.
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: User Interface (UI) Design Issues for the Multilingual Users: A Case Study, Abstract: A multitude of web and desktop applications are now widely available in diverse human languages. This paper explores the design issues that are specifically relevant for multilingual users. It reports on continued studies of Information System (IS) issues and users' behaviour across cross-cultural and transnational boundaries. Taking the BBC website as a model that is internationally recognised, usability tests were conducted to compare different versions of the website. The dependent variables derived from the questionnaire were analysed (via descriptive statistics) to elucidate the multilingual UI design issues. Using Principal Component Analysis (PCA), five de-correlated variables were identified, which were then used for hypothesis tests. A modified version of Herzberg's Hygiene-motivational Theory about the Workplace was applied to assess the components used in the website. Overall, it was concluded that the English versions of the website gave superior usability results, and this implies the need for deeper study of the usability problems of the translated versions.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: NeuroRule: A Connectionist Approach to Data Mining, Abstract: Classification, which involves finding rules that partition a given data set into disjoint groups, is one class of data mining problems. Approaches proposed so far for mining classification rules for large databases are mainly decision tree based symbolic learning methods. The connectionist approach based on neural networks has been thought not well suited for data mining. One of the major reasons cited is that knowledge generated by neural networks is not explicitly represented in the form of rules suitable for verification or interpretation by humans. This paper examines this issue. With our newly developed algorithms, rules which are similar to, or more concise than those generated by the symbolic methods can be extracted from the neural networks. The data mining process using neural networks with the emphasis on rule extraction is described. Experimental results and comparison with previously published works are presented.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Joint Trajectory and Communication Design for UAV-Enabled Multiple Access, Abstract: Unmanned aerial vehicles (UAVs) have attracted significant interest recently in wireless communication due to their high maneuverability, flexible deployment, and low cost. This paper studies a UAV-enabled wireless network where the UAV is employed as an aerial mobile base station (BS) to serve a group of users on the ground. To achieve fair performance among users, we maximize the minimum throughput over all ground users by jointly optimizing the multiuser communication scheduling and UAV trajectory over a finite horizon. The formulated problem is shown to be a mixed integer non-convex optimization problem that is difficult to solve in general. We thus propose an efficient iterative algorithm by applying the block coordinate descent and successive convex optimization techniques, which is guaranteed to converge to at least a locally optimal solution. To achieve fast convergence and stable throughput, we further propose a low-complexity initialization scheme for the UAV trajectory design based on the simple circular trajectory. Extensive simulation results are provided which show significant throughput gains of the proposed design as compared to other benchmark schemes.
[ 1, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Partial dust obscuration in active galactic nuclei as a cause of broad-line profile and lag variability, and apparent accretion disc inhomogeneities, Abstract: The profiles of the broad emission lines of active galactic nuclei (AGNs) and the time delays in their response to changes in the ionizing continuum ("lags") give information about the structure and kinematics of the inner regions of AGNs. Line profiles are also our main way of estimating the masses of the supermassive black holes (SMBHs). However, the profiles often show ill-understood, asymmetric structure and velocity-dependent lags vary with time. Here we show that partial obscuration of the broad-line region (BLR) by outflowing, compact, dusty clumps produces asymmetries and velocity-dependent lags similar to those observed. Our model explains previously inexplicable changes in the ratios of the hydrogen lines with time and velocity, the lack of correlation of changes in line profiles with variability of the central engine, the velocity dependence of lags, and the change of lags with time. We propose that changes on timescales longer than the light-crossing time do not come from dynamical changes in the BLR, but are a natural result of the effect of outflowing dusty clumps driven by radiation pressure acting on the dust. The motion of these clumps offers an explanation of long-term changes in polarization. The effects of the dust complicate the study of the structure and kinematics of the BLR and the search for sub-parsec SMBH binaries. Partial obscuration of the accretion disc can also provide the local fluctuations in luminosity that can explain sizes deduced from microlensing.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: From Plants to Landmarks: Time-invariant Plant Localization that uses Deep Pose Regression in Agricultural Fields, Abstract: Agricultural robots are expected to increase yields in a sustainable way and automate precision tasks, such as weeding and plant monitoring. At the same time, they move in a continuously changing, semi-structured field environment, in which features can hardly be found and reproduced at a later time. Challenges for Lidar and visual detection systems stem from the fact that plants can be very small, overlapping and have a steadily changing appearance. Therefore, a popular way to localize vehicles with high accuracy is based on expensive global navigation satellite systems and not on natural landmarks. The contribution of this work is a novel image-based plant localization technique that uses the time-invariant stem emerging point as a reference. Our approach is based on a fully convolutional neural network that learns landmark localization from RGB and NIR image input in an end-to-end manner. The network performs pose regression to generate a plant location likelihood map. Our approach allows us to cope with visual variances of plants both for different species and different growth stages. We achieve high localization accuracies as shown in detailed evaluations of a sugar beet cultivation phase. In experiments with our BoniRob we demonstrate that detections can be robustly reproduced with centimeter accuracy.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Quantitative Biology" ]
Title: Higher Theory and the Three Problems of Physics, Abstract: According to the Butterfield--Isham proposal, to understand quantum gravity we must revise the way we view the universe of mathematics. However, this paper demonstrates that the current elaborations of this programme neglect quantum interactions. The paper then introduces the Faddeev--Mickelsson anomaly which obstructs the renormalization of Yang--Mills theory, suggesting that theorising on many-particle systems requires a many-topos view of mathematics itself: higher theory. As our main contribution, the topos theoretic framework is used to conceptualise the fact that there are principally three different quantisation problems, the differences of which have been ignored not just by topos physicists but by most philosophers of science. We further argue that if higher theory proves to be necessary for understanding quantum gravity, its implications for philosophy will be foundational: higher theory challenges the propositional concept of truth and thus the very meaning of theorising in science.
[ 0, 1, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Bayesian Approaches to Distribution Regression, Abstract: Distribution regression has recently attracted much interest as a generic solution to the problem of supervised learning where labels are available at the group level, rather than at the individual level. Current approaches, however, do not propagate the uncertainty in observations due to sampling variability in the groups. This effectively assumes that small and large groups are estimated equally well, and should have equal weight in the final regression. We account for this uncertainty with a Bayesian distribution regression formalism, improving the robustness and performance of the model when group sizes vary. We frame our models in a neural network style, allowing for simple MAP inference using backpropagation to learn the parameters, as well as MCMC-based inference which can fully propagate uncertainty. We demonstrate our approach on illustrative toy datasets, as well as on a challenging problem of predicting age from images.
[ 1, 0, 0, 1, 0, 0 ]
[ "Statistics", "Computer Science" ]
Title: Atomic-Scale Structure Relaxation, Chemistry and Charge Distribution of Dislocation Cores in SrTiO3, Abstract: By using state-of-the-art microscopy and spectroscopy in aberration-corrected scanning transmission electron microscopes, we determine the atomic arrangements, occupancy, elemental distribution, and the electronic structures of dislocation cores in a 10° tilted SrTiO3 bicrystal. We identify two different types of oxygen-deficient dislocation cores, i.e., the SrO plane terminated Sr0.82Ti0.85O3-x (Ti3.67+, 0.48<x<0.91) and the TiO2 plane terminated Sr0.63Ti0.90O3-y (Ti3.60+, 0.57<y<1). They have the same Burgers vector of a[100] but different atomic arrangements and chemical properties. Besides the oxygen vacancies, Sr vacancies and a rocksalt-like titanium oxide reconstruction are also identified in the dislocation core with TiO2 plane termination. Our atomic-scale study reveals the true atomic structures and chemistry of individual dislocation cores, providing useful insights into understanding the properties of dislocations and grain boundaries.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Self-similar solutions of fragmentation equations revisited, Abstract: We study the large time behaviour of the mass (size) of particles described by the fragmentation equation with homogeneous breakup kernel. We give necessary and sufficient conditions for the convergence of solutions to the unique self-similar solution.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Ultracold atoms in multiple-radiofrequency dressed adiabatic potentials, Abstract: We present the first experimental demonstration of a multiple-radiofrequency dressed potential for the configurable magnetic confinement of ultracold atoms. We load cold $^{87}$Rb atoms into a double well potential with an adjustable barrier height, formed by three radiofrequencies applied to atoms in a static quadrupole magnetic field. Our multiple-radiofrequency approach gives precise control over the double well characteristics, including the depth of individual wells and the height of the barrier, and enables reliable transfer of atoms between the available trapping geometries. We have characterised the multiple-radiofrequency dressed system using radiofrequency spectroscopy, finding good agreement with the eigenvalues numerically calculated using Floquet theory. This method creates trapping potentials that can be reconfigured by changing the amplitudes, polarizations and frequencies of the applied dressing fields, and easily extended with additional dressing frequencies.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Distribution Matching in Variational Inference, Abstract: We show that Variational Autoencoders consistently fail to learn marginal distributions in latent and visible space. We ask whether this is a consequence of matching conditional distributions, or a limitation of explicit model and posterior distributions. We explore alternatives provided by marginal distribution matching and implicit distributions through the use of Generative Adversarial Networks in variational inference. We perform a large-scale evaluation of several VAE-GAN hybrids and explore the implications of class probability estimation for learning distributions. We conclude that at present VAE-GAN hybrids have limited applicability: they are harder to scale, evaluate, and use for inference compared to VAEs; and they do not improve over the generation quality of GANs.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Orthogonal groups in characteristic 2 acting on polytopes of high rank, Abstract: We show that for all integers $m\geq 2$, and all integers $k\geq 2$, the orthogonal groups $\mathrm{O}^{\pm}(2m,\mathbb{F}_{2^k})$ act on abstract regular polytopes of rank $2m$, and the symplectic groups $\mathrm{Sp}(2m,\mathbb{F}_{2^k})$ act on abstract regular polytopes of rank $2m+1$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Free LSD: Prior-Free Visual Landing Site Detection for Autonomous Planes, Abstract: Full autonomy for fixed-wing unmanned aerial vehicles (UAVs) requires the capability to autonomously detect potential landing sites in unknown and unstructured terrain, allowing for self-governed mission completion or handling of emergency situations. In this work, we propose a perception system addressing this challenge by detecting landing sites based on their texture and geometric shape without using any prior knowledge about the environment. The proposed method considers hazards within the landing region such as terrain roughness and slope, surrounding obstacles that obscure the landing approach path, and the local wind field that is estimated by the on-board EKF. The latter enables applicability of the proposed method on small-scale autonomous planes without landing gear. A safe approach path is computed based on the UAV dynamics, expected state estimation and actuator uncertainty, and the on-board computed elevation map. The proposed framework has been successfully tested on photo-realistic synthetic datasets and in challenging real-world environments.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Physics" ]
Title: Reconstruction formulas for Photoacoustic Imaging in Attenuating Media, Abstract: In this paper we study the problem of photoacoustic inversion in a weakly attenuating medium. We present explicit reconstruction formulas in such media and show that the inversion based on such formulas is moderately ill-posed. Moreover, we present a numerical algorithm for imaging and demonstrate in numerical experiments the feasibility of this approach.
[ 0, 0, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Rank Determination for Low-Rank Data Completion, Abstract: Recently, fundamental conditions on the sampling patterns have been obtained for finite completability of low-rank matrices or tensors given the corresponding ranks. In this paper, we consider the scenario where the rank is not given and we aim to approximate the unknown rank based on the location of sampled entries and some given completion. We consider a number of data models, including single-view matrix, multi-view matrix, CP tensor, tensor-train tensor and Tucker tensor. For each of these data models, we provide an upper bound on the rank when an arbitrary low-rank completion is given. We characterize these bounds both deterministically, i.e., with probability one given that the sampling pattern satisfies certain combinatorial properties, and probabilistically, i.e., with high probability given that the sampling probability is above some threshold. Moreover, for both single-view matrix and CP tensor, we are able to show that the obtained upper bound is exactly equal to the unknown rank if the lowest-rank completion is given. Furthermore, we provide numerical experiments for the case of single-view matrix, where we use nuclear norm minimization to find a low-rank completion of the sampled data and we observe that in most of the cases the proposed upper bound on the rank is equal to the true rank.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Network structure from rich but noisy data, Abstract: Driven by growing interest in the sciences, industry, and among the broader public, a large number of empirical studies have been conducted in recent years of the structure of networks ranging from the internet and the world wide web to biological networks and social networks. The data produced by these experiments are often rich and multimodal, yet at the same time they may contain substantial measurement error. In practice, this means that the true network structure can differ greatly from naive estimates made from the raw data, and hence that conclusions drawn from those naive estimates may be significantly in error. In this paper we describe a technique that circumvents this problem and allows us to make optimal estimates of the true structure of networks in the presence of both richly textured data and significant measurement uncertainty. We give example applications to two different social networks, one derived from face-to-face interactions and one from self-reported friendships.
[ 1, 1, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Algebraic Foundations of Proof Refinement, Abstract: We contribute a general apparatus for dependent tactic-based proof refinement in the LCF tradition, in which the statements of subgoals may express a dependency on the proofs of other subgoals; this form of dependency is extremely useful and can serve as an algorithmic alternative to extensions of LCF based on non-local instantiation of schematic variables. Additionally, we introduce a novel behavioral distinction between refinement rules and tactics based on naturality. Our framework, called Dependent LCF, is already deployed in the nascent RedPRL proof assistant for computational cubical type theory.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: iCorr : Complex correlation method to detect origin of replication in prokaryotic and eukaryotic genomes, Abstract: Computational prediction of origin of replication (ORI) has been of great interest in bioinformatics and several methods including GC Skew, Z curve, auto-correlation etc. have been explored in the past. In this paper, we have extended the auto-correlation method to predict ORI location with much higher resolution for prokaryotes. The proposed complex correlation method (iCorr) converts the genome sequence into a sequence of complex numbers by mapping the nucleotides to {+1,-1,+i,-i} instead of {+1,-1} used in the auto-correlation method (here, 'i' is the square root of -1). Thus, the iCorr method uses information about the positions of all the four nucleotides unlike the earlier auto-correlation method which uses the positional information of only one nucleotide. Also, this earlier method required visual inspection of the obtained graphs to identify the location of origin of replication. The proposed iCorr method does away with this need and is able to identify the origin location simply by picking the peak in the iCorr graph. The iCorr method also works for a much smaller segment size compared to the earlier auto-correlation method, which can be very helpful in experimental validation of the computational predictions. We have also developed a variant of the iCorr method to predict ORI location in eukaryotes and have tested it with the experimentally known origin locations of S. cerevisiae with an average accuracy of 71.76%.
[ 0, 1, 0, 0, 0, 0 ]
[ "Quantitative Biology", "Computer Science" ]
Title: Simons' type formula for slant submanifolds of complex space form, Abstract: In this paper, we study a slant submanifold of a complex space form. We also obtain an integral formula of Simons' type for a Kaehlerian slant submanifold in a complex space form and apply it to prove our main result.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: An Agile Software Engineering Method to Design Blockchain Applications, Abstract: Cryptocurrencies and their foundation technology, the Blockchain, are reshaping finance and economics, allowing a decentralized approach enabling trusted applications with no trusted counterpart. More recently, the Blockchain and the programs running on it, called Smart Contracts, are also finding more and more applications in all fields requiring trust and sound certifications. Some people have come to the point of saying that the "Blockchain revolution" can be compared to that of the Internet and the Web in their early days. As a result, all the software development revolving around the Blockchain technology is growing at a staggering rate. The feeling of many software engineers about such huge interest in Blockchain technologies is that of unruled and hurried software development, a sort of competition on a first-come-first-served basis which assures neither software quality nor that the basic concepts of software engineering are taken into account. This paper tries to cope with this issue, proposing a software development process to gather the requirements and to analyze, design, develop, test and deploy Blockchain applications. The process is based on several Agile practices, such as User Stories and iterative and incremental development based on them. However, it also makes use of more formal notations, such as some UML diagrams describing the design of the system, with additions to represent specific concepts found in Blockchain development. The method is described in good detail, and an example is given to show how it works.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Optimizing Prediction Intervals by Tuning Random Forest via Meta-Validation, Abstract: Recent studies have shown that tuning prediction models increases prediction accuracy and that Random Forest can be used to construct prediction intervals. However, to the best of our knowledge, no study has investigated the need to, and the manner in which one can, tune Random Forest for optimizing prediction intervals; this paper aims to fill this gap. We explore a tuning approach that combines an effectively exhaustive search with a validation technique on a single Random Forest parameter. This paper investigates which, out of eight validation techniques, are beneficial for tuning, i.e., which automatically choose a Random Forest configuration constructing prediction intervals that are reliable and with a smaller width than the default configuration. Additionally, we present and validate three meta-validation techniques to determine which are beneficial, i.e., those which automatically choose a beneficial validation technique. This study uses data from our industrial partner (Keymind Inc.) and the Tukutuku Research Project, related to post-release defect prediction and Web application effort estimation, respectively. Results from our study indicate that: i) the default configuration is frequently unreliable, ii) most of the validation techniques, including previously successfully adopted ones such as 50/50 holdout and bootstrap, are counterproductive in most of the cases, and iii) the 75/25 holdout meta-validation technique is always beneficial; i.e., it avoids the likely counterproductive effects of validation techniques.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: The CMS HGCAL detector for HL-LHC upgrade, Abstract: The High Luminosity LHC (HL-LHC) will integrate 10 times more luminosity than the LHC, posing significant challenges for radiation tolerance and event pileup on detectors, especially for forward calorimetry, and foreshadows the challenges facing future colliders. As part of its HL-LHC upgrade program, the CMS collaboration is designing a High Granularity Calorimeter to replace the existing endcap calorimeters. It features unprecedented transverse and longitudinal segmentation for both electromagnetic (ECAL) and hadronic (HCAL) compartments. This will facilitate particle-flow calorimetry, where the fine structure of showers can be measured and used to enhance pileup rejection and particle identification, whilst still achieving good energy resolution. The ECAL and a large fraction of HCAL will be based on hexagonal silicon sensors of 0.5-1cm$^{2}$ cell size, with the remainder of the HCAL based on highly-segmented scintillators with SiPM readout. The intrinsic high-precision timing capabilities of the silicon sensors will add an extra dimension to event reconstruction, especially in terms of pileup rejection. An overview of the HGCAL project is presented, covering motivation, engineering design, readout and trigger concepts, and performance (simulated and from beam tests).
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Targeted Damage to Interdependent Networks, Abstract: The giant mutually connected component (GMCC) of an interdependent or multiplex network collapses with a discontinuous hybrid transition under random damage to the network. If the nodes to be damaged are selected in a targeted way, the collapse of the GMCC may occur significantly sooner. Finding the minimal damage set which destroys the largest mutually connected component of a given interdependent network is a computationally prohibitive simultaneous optimization problem. We introduce a simple heuristic strategy -- Effective Multiplex Degree -- for targeted attack on interdependent networks that leverages the indirect damage inherent in multiplex networks to achieve a damage set smaller than that found by any other non computationally intensive algorithm. We show that the intuition from single layer networks that decycling (damage of the $2$-core) is the most effective way to destroy the giant component, does not carry over to interdependent networks, and in fact such approaches are worse than simply removing the highest degree nodes.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Physics", "Mathematics" ]
Title: High-accuracy phase-field models for brittle fracture based on a new family of degradation functions, Abstract: Phase-field approaches to fracture based on energy minimization principles have been rapidly gaining popularity in recent years, and are particularly well-suited for simulating crack initiation and growth in complex fracture networks. In the phase-field framework, the surface energy associated with crack formation is calculated by evaluating a functional defined in terms of a scalar order parameter and its gradients, which in turn describe the fractures in a diffuse sense following a prescribed regularization length scale. Imposing stationarity of the total energy leads to a coupled system of partial differential equations, one enforcing stress equilibrium and another governing phase-field evolution. The two equations are coupled through an energy degradation function that models the loss of stiffness in the bulk material as it undergoes damage. In the present work, we introduce a new parametric family of degradation functions aimed at increasing the accuracy of phase-field models in predicting critical loads associated with crack nucleation as well as the propagation of existing fractures. An additional goal is the preservation of linear elastic response in the bulk material prior to fracture. Through the analysis of several numerical examples, we demonstrate the superiority of the proposed family of functions to the classical quadratic degradation function that is used most often in the literature.
[ 0, 1, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Straggler Mitigation in Distributed Optimization Through Data Encoding, Abstract: Slow running or straggler tasks can significantly reduce computation speed in distributed computation. Recently, coding-theory-inspired approaches have been applied to mitigate the effect of straggling, through embedding redundancy in certain linear computational steps of the optimization algorithm, thus completing the computation without waiting for the stragglers. In this paper, we propose an alternate approach where we embed the redundancy directly in the data itself, and allow the computation to proceed completely oblivious to encoding. We propose several encoding schemes, and demonstrate that popular batch algorithms, such as gradient descent and L-BFGS, applied in a coding-oblivious manner, deterministically achieve sample path linear convergence to an approximate solution of the original problem, using an arbitrarily varying subset of the nodes at each iteration. Moreover, this approximation can be controlled by the amount of redundancy and the number of nodes used in each iteration. We provide experimental results demonstrating the advantage of the approach over uncoded and data replication strategies.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Inference Trees: Adaptive Inference with Exploration, Abstract: We introduce inference trees (ITs), a new class of inference methods that build on ideas from Monte Carlo tree search to perform adaptive sampling in a manner that balances exploration with exploitation, ensures consistency, and alleviates pathologies in existing adaptive methods. ITs adaptively sample from hierarchical partitions of the parameter space, while simultaneously learning these partitions in an online manner. This enables ITs to not only identify regions of high posterior mass, but also maintain uncertainty estimates to track regions where significant posterior mass may have been missed. ITs can be based on any inference method that provides a consistent estimate of the marginal likelihood. They are particularly effective when combined with sequential Monte Carlo, where they capture long-range dependencies and yield improvements beyond proposal adaptation alone.
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Computer Science" ]
Title: Faster Fuzzing: Reinitialization with Deep Neural Models, Abstract: We improve the performance of the American Fuzzy Lop (AFL) fuzz testing framework by using Generative Adversarial Network (GAN) models to reinitialize the system with novel seed files. We assess performance based on the temporal rate at which we produce novel and unseen code paths. We compare this approach to seed file generation from a random draw of bytes observed in the training seed files. The code path lengths and variations were not sufficiently diverse to fully replace AFL input generation. However, augmenting native AFL with these additional code paths demonstrated improvements over AFL alone. Specifically, experiments showed the GAN was faster and more effective than the LSTM and out-performed a random augmentation strategy, as measured by the number of unique code paths discovered. GAN helps AFL discover 14.23% more code paths than the random strategy in the same amount of CPU time, finds 6.16% more unique code paths, and finds paths that are on average 13.84% longer. Using GAN shows promise as a reinitialization strategy for AFL to help the fuzzer exercise deep paths in software.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Second Order Analysis for Joint Source-Channel Coding with Markovian Source, Abstract: We derive the second order rates of joint source-channel coding when the source obeys an irreducible and ergodic Markov process and the channel is discrete and memoryless; a previous study solved this only in a special case. We also compare the joint source-channel scheme with the separation scheme in the second order regime, whereas a previous study made a notable comparison only through numerical calculation. To achieve these two notable advances, we introduce two kinds of new distribution families, the switched Gaussian convolution distribution and the *-product distribution, which are defined by modifying the Gaussian distribution.
[ 1, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics", "Statistics" ]
Title: Implicit Weight Uncertainty in Neural Networks, Abstract: Modern neural networks tend to be overconfident on unseen, noisy or incorrectly labelled data and do not produce meaningful uncertainty measures. Bayesian deep learning aims to address this shortcoming with variational approximations (such as Bayes by Backprop or Multiplicative Normalising Flows). However, current approaches have limitations regarding flexibility and scalability. We introduce Bayes by Hypernet (BbH), a new method of variational approximation that interprets hypernetworks as implicit distributions. It naturally uses neural networks to model arbitrarily complex distributions and scales to modern deep learning architectures. In our experiments, we demonstrate that our method achieves competitive accuracies and predictive uncertainties on MNIST and a CIFAR5 task, while being the most robust against adversarial attacks.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: A systematic analysis of the XMM-Newton background: III. Impact of the magnetospheric environment, Abstract: A detailed characterization of the particle induced background is fundamental for many of the scientific objectives of the Athena X-ray telescope; thus, adequate knowledge of the background that will be encountered by Athena is desirable. Current X-ray telescopes have shown that the intensity of the particle induced background can be highly variable. Different regions of the magnetosphere can have very different environmental conditions, which can, in principle, differently affect the particle induced background detected by the instruments. We present results concerning the influence of the magnetospheric environment on the background detected by the EPIC instrument onboard XMM-Newton through the estimate of the variation of the in-Field-of-View background excess along the XMM-Newton orbit. An important contribution to the XMM background, which may affect the Athena background as well, comes from soft proton flares. Along with the flaring component a low-intensity component is also present. We find that both show modest variations in the different magnetozones and that the soft proton component shows a strong trend with the distance from Earth.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: DeepPermNet: Visual Permutation Learning, Abstract: We present a principled approach to uncover the structure of visual data by solving a novel deep learning task coined visual permutation learning. The goal of this task is to find the permutation that recovers the structure of data from shuffled versions of it. In the case of natural images, this task boils down to recovering the original image from patches shuffled by an unknown permutation matrix. Unfortunately, permutation matrices are discrete, thereby posing difficulties for gradient-based methods. To this end, we resort to a continuous approximation of these matrices using doubly-stochastic matrices which we generate from standard CNN predictions using Sinkhorn iterations. Unrolling these iterations in a Sinkhorn network layer, we propose DeepPermNet, an end-to-end CNN model for this task. The utility of DeepPermNet is demonstrated on two challenging computer vision problems, namely, (i) relative attributes learning and (ii) self-supervised representation learning. Our results show state-of-the-art performance on the Public Figures and OSR benchmarks for (i) and on the classification and segmentation tasks on the PASCAL VOC dataset for (ii).
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: ADE String Chains and Mirror Symmetry, Abstract: 6d superconformal field theories (SCFTs) are the SCFTs in the highest possible dimension. They can be geometrically engineered in F-theory by compactifying on non-compact elliptic Calabi-Yau manifolds. In this paper we focus on the class of SCFTs whose base geometry is determined by $-2$ curves intersecting according to ADE Dynkin diagrams and derive the corresponding mirror Calabi-Yau manifold. The mirror geometry is uniquely determined in terms of the mirror curve which has also an interpretation in terms of the Seiberg-Witten curve of the four-dimensional theory arising from torus compactification. Adding the affine node of the ADE quiver to the base geometry, we connect to recent results on SYZ mirror symmetry for the $A$ case and provide a physical interpretation in terms of little string theory. Our results, however, go beyond this case as our construction naturally covers the $D$ and $E$ cases as well.
[ 0, 0, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Asymptotic efficiency of the proportional compensation scheme for a large number of producers, Abstract: We consider a manager who allocates some fixed total payment amount between $N$ rational agents in order to maximize the aggregate production. The profit of the $i$-th agent is the difference between the compensation (reward) obtained from the manager and the production cost. We compare (i) the \emph{normative} compensation scheme, where the manager enforces the agents to follow an optimal cooperative strategy; (ii) the \emph{linear piece rates} compensation scheme, where the manager announces an optimal reward per unit good; (iii) the \emph{proportional} compensation scheme, where an agent's reward is proportional to his contribution to the total output. Denoting the corresponding total production levels by $s^*$, $\hat s$ and $\overline s$ respectively, where the last one is related to the unique Nash equilibrium, we examine the limits of the prices of anarchy $\mathscr A_N=s^*/\overline s$, $\mathscr A_N'=\hat s/\overline s$ as $N\to\infty$. These limits are calculated for the cases of identical convex costs with power asymptotics at the origin, and for power costs, corresponding to the Cobb-Douglas and generalized CES production functions with decreasing returns to scale. Our results show that asymptotically no performance is lost in terms of $\mathscr A'_N$, and in terms of $\mathscr A_N$ the loss does not exceed $31\%$.
[ 1, 0, 0, 0, 0, 0 ]
[ "Mathematics", "Quantitative Finance" ]
Title: Non-equilibrium statistical mechanics of continuous attractors, Abstract: Continuous attractors have been used to understand recent neuroscience experiments where persistent activity patterns encode internal representations of external attributes like head direction or spatial location. However, the conditions under which the emergent bump of neural activity in such networks can be manipulated by space and time-dependent external sensory or motor signals are not understood. Here, we find fundamental limits on how rapidly internal representations encoded along continuous attractors can be updated by an external signal. We apply these results to place cell networks to derive a velocity-dependent non-equilibrium memory capacity in neural networks.
[ 0, 0, 0, 0, 1, 0 ]
[ "Physics", "Quantitative Biology" ]
Title: Exponentiated Generalized Pareto Distribution: Properties and applications towards Extreme Value Theory, Abstract: The Generalized Pareto Distribution (GPD) plays a central role in modelling heavy tail phenomena in many applications. Applying the GPD to actual datasets, however, is a non-trivial task. One common way suggested in the literature to investigate the tail behaviour is to take the logarithm of the original dataset in order to reduce the sample variability. Inspired by this, we propose and study the Exponentiated Generalized Pareto Distribution (exGPD), which is created via log-transform of the GPD variable. After introducing the exGPD we derive various distributional quantities, including the moment generating function and tail risk measures. As an application we also develop a plot as an alternative to the Hill plot to identify the tail index of heavy tailed datasets, based on the moment matching for the exGPD. Various numerical analyses with both simulated and actual datasets show that the proposed plot works well.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Quantitative Finance" ]
Title: Reflexive polytopes arising from perfect graphs, Abstract: Reflexive polytopes form one of the distinguished classes of lattice polytopes. Reflexive polytopes that possess the integer decomposition property are of particular interest. In the present paper, by virtue of algebraic techniques on Gröbner bases, a new class of reflexive polytopes which possess the integer decomposition property and which arise from perfect graphs will be presented. Furthermore, the Ehrhart $\delta$-polynomials of these polytopes will be studied.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: Meta Networks, Abstract: Neural networks have been successfully applied in applications with a large amount of labeled data. However, the task of rapid generalization on new concepts with small training data while preserving performances on previously learned ones still presents a significant challenge to neural network models. In this work, we introduce a novel meta learning method, Meta Networks (MetaNet), that learns a meta-level knowledge across tasks and shifts its inductive biases via fast parameterization for rapid generalization. When evaluated on Omniglot and Mini-ImageNet benchmarks, our MetaNet models achieve a near human-level performance and outperform the baseline approaches by up to 6% accuracy. We demonstrate several appealing properties of MetaNet relating to generalization and continual learning.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Analysing Magnetism Using Scanning SQUID Microscopy, Abstract: Scanning superconducting quantum interference device microscopy (SSM) is a scanning probe technique that images local magnetic flux, which allows for mapping of magnetic fields with high field and spatial accuracy. Many studies involving SSM have been published in the last decades, using SSM to make qualitative statements about magnetism. However, quantitative analysis using SSM has received less attention. In this work, we discuss several aspects of interpreting SSM images and methods to improve quantitative analysis. First, we analyse the spatial resolution and how it depends on several factors. Second, we discuss the analysis of SSM scans and the information obtained from the SSM data. Using simulations, we show how signals evolve as a function of changing scan height, SQUID loop size, magnetization strength and orientation. We also investigated 2-dimensional autocorrelation analysis to extract information about the size, shape and symmetry of magnetic features. Finally, we provide an outlook on possible future applications and improvements.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Algorithms for solving optimization problems arising from deep neural net models: nonsmooth problems, Abstract: Machine Learning models incorporating multiple layered learning networks have been seen to be effective for various classification problems. The resulting optimization problem to solve for the optimal vector minimizing the empirical risk is, however, highly nonconvex. This alone presents a challenge to application and development of appropriate optimization algorithms for solving the problem. However, in addition, there are a number of interesting problems for which the objective function is nonsmooth and nonseparable. In this paper, we summarize the primary challenges involved, the state of the art, and present some numerical results on an interesting and representative class of problems.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Are Saddles Good Enough for Deep Learning?, Abstract: Recent years have seen a growing interest in understanding deep neural networks from an optimization perspective. It is understood now that converging to low-cost local minima is sufficient for such models to become effective in practice. However, in this work, we propose a new hypothesis based on recent theoretical findings and empirical studies that deep neural network models actually converge to saddle points with high degeneracy. Our findings from this work are new, and can have a significant impact on the development of gradient descent based methods for training deep networks. We validated our hypothesis using an extensive experimental evaluation on standard datasets such as MNIST and CIFAR-10, and also showed that recent efforts that attempt to escape saddles finally converge to saddles with high degeneracy, which we define as `good saddles'. We also verified the famous Wigner's Semicircle Law in our experimental results.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Monotonicity and enclosure methods for the p-Laplace equation, Abstract: We show that the convex hull of a monotone perturbation of a homogeneous background conductivity in the $p$-conductivity equation is determined by knowledge of the nonlinear Dirichlet-Neumann operator. We give two independent proofs, one of which is based on the monotonicity method and the other on the enclosure method. Our results are constructive and require no jump or smoothness properties on the conductivity perturbation or its support.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Warm dark matter and the ionization history of the Universe, Abstract: In warm dark matter scenarios structure formation is suppressed on small scales with respect to the cold dark matter case, reducing the number of low-mass halos and the fraction of ionized gas at high redshifts and thus, delaying reionization. This has an impact on the ionization history of the Universe and measurements of the optical depth to reionization, of the evolution of the global fraction of ionized gas and of the thermal history of the intergalactic medium, can be used to set constraints on the mass of the dark matter particle. However, the suppression of the fraction of ionized medium in these scenarios can be partly compensated by varying other parameters, as the ionization efficiency or the minimum mass for which halos can host star-forming galaxies. Here we use different data sets regarding the ionization and thermal histories of the Universe and, taking into account the degeneracies from several astrophysical parameters, we obtain a lower bound on the mass of thermal warm dark matter candidates of $m_X > 1.3$ keV, or $m_s > 5.5$ keV for the case of sterile neutrinos non-resonantly produced in the early Universe, both at 90\% confidence level.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Astrophysics" ]
Title: Tetramer Bound States in Heteronuclear Systems, Abstract: We calculate the universal spectrum of trimer and tetramer states in heteronuclear mixtures of ultracold atoms with different masses in the vicinity of the heavy-light dimer threshold. To extract the energies, we solve the three- and four-body problem for simple two- and three-body potentials tuned to the universal region using the Gaussian expansion method. We focus on the case of one light particle of mass $m$ and two or three heavy bosons of mass $M$ with resonant heavy-light interactions. We find that trimer and tetramer cross into the heavy-light dimer threshold at almost the same point and that as the mass ratio $M/m$ decreases, the distance between the thresholds for trimer and tetramer states becomes smaller. We also comment on the possibility of observing exotic three-body states consisting of a dimer and two atoms in this region and compare with previous work.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Mutual Kernel Matrix Completion, Abstract: With the huge influx of various data nowadays, extracting knowledge from them has become an interesting but tedious task among data scientists, particularly when the data come in heterogeneous form and have missing information. Many data completion techniques have been introduced, especially with the advent of kernel methods. However, among the many data completion techniques available in the literature, studies about mutually completing several incomplete kernel matrices have not been given much attention yet. In this paper, we present a new method, called the Mutual Kernel Matrix Completion (MKMC) algorithm, that tackles this problem of mutually inferring the missing entries of multiple kernel matrices by combining the notions of data fusion and kernel matrix completion, applied to biological data sets to be used for classification tasks. We first introduce an objective function to be minimized by exploiting the EM algorithm, which in turn results in an estimate of the missing entries of the kernel matrices involved. The completed kernel matrices are then combined to produce a model matrix that can be used to further improve the obtained estimates. An interesting result of our study is that the E-step and the M-step are given in closed form, which makes our algorithm efficient in terms of time and memory. After completion, the (completed) kernel matrices are then used to train an SVM classifier to test how well the relationships among the entries are preserved. Our empirical results show that the proposed algorithm bested the traditional completion techniques in preserving the relationships among the data points, and in accurately recovering the missing kernel matrix entries. Overall, MKMC offers a promising solution to the problem of mutual estimation of a number of relevant incomplete kernel matrices.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics", "Quantitative Biology" ]
Title: Medical Image Synthesis for Data Augmentation and Anonymization using Generative Adversarial Networks, Abstract: Data diversity is critical to success when training deep learning models. Medical imaging data sets are often imbalanced as pathologic findings are generally rare, which introduces significant challenges when training deep learning models. In this work, we propose a method to generate synthetic abnormal MRI images with brain tumors by training a generative adversarial network using two publicly available data sets of brain MRI. We demonstrate two unique benefits that the synthetic images provide. First, we illustrate improved performance on tumor segmentation by leveraging the synthetic images as a form of data augmentation. Second, we demonstrate the value of generative models as an anonymization tool, achieving comparable tumor segmentation results when trained on the synthetic data versus when trained on real subject data. Together, these results offer a potential solution to two of the largest challenges facing machine learning in medical imaging, namely the small incidence of pathological findings, and the restrictions around sharing of patient data.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Quantitative Biology" ]
Title: Adaptive Feature Representation for Visual Tracking, Abstract: Robust feature representation plays a significant role in visual tracking. However, it remains a challenging issue, since many factors may affect the experimental performance. Existing methods that combine different features with fixed, equal weights can hardly solve this issue, due to the different statistical properties of features across various scenarios and attributes. In this paper, by exploiting the internal relationship among these features, we develop a robust method to construct a more stable feature representation. More specifically, we utilize a co-training paradigm to formulate the intrinsic complementary information of multi-feature templates into the efficient correlation filter framework. We test our approach on challenging sequences with illumination variation, scale variation, deformation, etc. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: The Remarkable Similarity of Massive Galaxy Clusters From z~0 to z~1.9, Abstract: We present the results of a Chandra X-ray survey of the 8 most massive galaxy clusters at z>1.2 in the South Pole Telescope 2500 deg^2 survey. We combine this sample with previously-published Chandra observations of 49 massive X-ray-selected clusters at 0<z<0.1 and 90 SZ-selected clusters at 0.25<z<1.2 to constrain the evolution of the intracluster medium (ICM) over the past ~10 Gyr. We find that the bulk of the ICM has evolved self similarly over the full redshift range probed here, with the ICM density at r>0.2R500 scaling like E(z)^2. In the centers of clusters (r<0.1R500), we find significant deviations from self similarity (n_e ~ E(z)^{0.1+/-0.5}), consistent with no redshift dependence. When we isolate clusters with over-dense cores (i.e., cool cores), we find that the average over-density profile has not evolved with redshift -- that is, cool cores have not changed in size, density, or total mass over the past ~9-10 Gyr. We show that the evolving "cuspiness" of clusters in the X-ray, reported by several previous studies, can be understood in the context of a cool core with fixed properties embedded in a self similarly-evolving cluster. We find no measurable evolution in the X-ray morphology of massive clusters, seemingly in tension with the rapidly-rising (with redshift) rate of major mergers predicted by cosmological simulations. We show that these two results can be brought into agreement if we assume that the relaxation time after a merger is proportional to the crossing time, since the latter is proportional to H(z)^(-1).
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Adaptive Similar Triangles Method: a Stable Alternative to Sinkhorn's Algorithm for Regularized Optimal Transport, Abstract: In this paper, we are motivated by two important applications: the entropy-regularized optimal transport problem and road or IP traffic demand matrix estimation by entropy model. Both of them include solving a special type of optimization problem with linear equality constraints and an objective given as a sum of an entropy regularizer and a linear function. It is known that the state-of-the-art solvers for this problem, which are based on Sinkhorn's method (also known as the RSA or balancing method), can fail to work when the entropy-regularization parameter is small. We consider the above optimization problem as a particular instance of a general strongly convex optimization problem with linear constraints. We propose a new algorithm to solve this general class of problems. Our approach is based on the transition to the dual problem. First, we introduce a new accelerated gradient method with adaptive choice of the gradient's Lipschitz constant. Then, we apply this method to the dual problem and show how to reconstruct an approximate solution to the primal problem with provable convergence rate. We prove the rate $O(1/k^2)$, $k$ being the iteration counter, both for the absolute value of the primal objective residual and constraints infeasibility. Our method has per-iteration complexity similar to that of Sinkhorn's method, but is faster and more stable numerically when the regularization parameter is small. We illustrate the advantage of our method by numerical experiments for the two mentioned applications. We show that there exists a threshold, such that, when the regularization parameter is smaller than this threshold, our method outperforms Sinkhorn's method in terms of computation time.
[ 0, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Maximum genus of the Jenga like configurations, Abstract: We treat the boundary of the union of blocks in the Jenga game as a surface with a polyhedral structure and consider its genus. We generalize the game and determine the maximum genus of the generalized game.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: A Decidable Very Expressive Description Logic for Databases (Extended Version), Abstract: We introduce $\mathcal{DLR}^+$, an extension of the n-ary propositionally closed description logic $\mathcal{DLR}$ to deal with attribute-labelled tuples (generalising the positional notation), projections of relations, and global and local objectification of relations, able to express inclusion, functional, key, and external uniqueness dependencies. The logic is equipped with both TBox and ABox axioms. We show how a simple syntactic restriction on the appearance of projections sharing common attributes in a $\mathcal{DLR}^+$ knowledge base makes reasoning in the language decidable with the same computational complexity as $\mathcal{DLR}$. The obtained $\mathcal{DLR}^\pm$ n-ary description logic is able to encode more thoroughly conceptual data models such as EER, UML, and ORM.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Centrality measures for graphons: Accounting for uncertainty in networks, Abstract: As relational datasets modeled as graphs keep increasing in size and their data-acquisition is permeated by uncertainty, graph-based analysis techniques can become computationally and conceptually challenging. In particular, node centrality measures rely on the assumption that the graph is perfectly known -- a premise not necessarily fulfilled for large, uncertain networks. Accordingly, centrality measures may fail to faithfully extract the importance of nodes in the presence of uncertainty. To mitigate these problems, we suggest a statistical approach based on graphon theory: we introduce formal definitions of centrality measures for graphons and establish their connections to classical graph centrality measures. A key advantage of this approach is that centrality measures defined at the modeling level of graphons are inherently robust to stochastic variations of specific graph realizations. Using the theory of linear integral operators, we define degree, eigenvector, Katz and PageRank centrality functions for graphons and establish concentration inequalities demonstrating that graphon centrality functions arise naturally as limits of their counterparts defined on sequences of graphs of increasing size. The same concentration inequalities also provide high-probability bounds between the graphon centrality functions and the centrality measures on any sampled graph, thereby establishing a measure of uncertainty of the measured centrality score.
[ 1, 0, 1, 1, 0, 0 ]
[ "Computer Science", "Statistics", "Mathematics" ]
Title: A time-periodic mechanical analog of the quantum harmonic oscillator, Abstract: We theoretically investigate the stability and linear oscillatory behavior of a naturally unstable particle whose potential energy is harmonically modulated. We find this fundamental dynamical system is analogous in time to a quantum harmonic oscillator. In a certain modulation limit, a.k.a. the Kapitza regime, the modulated oscillator can behave like an effective classical harmonic oscillator. But in the overlooked opposite limit, the stable modes of vibrations are quantized in the modulation parameter space. By analogy with the statistical interpretation of quantum physics, those modes can be characterized by the time-energy uncertainty relation of a quantum harmonic oscillator. Reducing the almost-periodic vibrational modes of the particle to their periodic eigenfunctions, one can transform the original equation of motion to a dimensionless Schrödinger stationary wave equation with a harmonic potential. This reduction process introduces two features reminiscent of the quantum realm: a wave-particle duality and a loss of causality that could legitimate a statistical interpretation of the computed eigenfunctions. These results shed new light on periodically time-varying linear dynamical systems and open an original path in the recently revived field of quantum mechanical analogs.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Time-resolved polarimetry of the superluminous SN 2015bn with the Nordic Optical Telescope, Abstract: We present imaging polarimetry of the superluminous supernova SN 2015bn, obtained over nine epochs between $-$20 and $+$46 days with the Nordic Optical Telescope. This was a nearby, slowly-evolving Type I superluminous supernova that has been studied extensively and for which two epochs of spectropolarimetry are also available. Based on field stars, we determine the interstellar polarisation in the Galaxy to be negligible. The polarisation of SN 2015bn shows a statistically significant increase during the last epochs, confirming previous findings. Our well-sampled imaging polarimetry series allows us to determine that this increase (from $\sim 0.54\%$ to $\gtrsim 1.10\%$) coincides in time with rapid changes that took place in the optical spectrum. We conclude that the supernova underwent a `phase transition' at around $+$20 days, when the photospheric emission shifted from an outer layer, dominated by natal C and O, to a more aspherical inner core, dominated by freshly nucleosynthesized material. This two-layered model might account for the characteristic appearance and properties of Type I superluminous supernovae.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Solving Non-parametric Inverse Problem in Continuous Markov Random Field using Loopy Belief Propagation, Abstract: In this paper, we address the inverse problem, or the statistical machine learning problem, in Markov random fields with a non-parametric pair-wise energy function with continuous variables. The inverse problem is formulated by maximum likelihood estimation. The exact treatment of maximum likelihood estimation is intractable because of two problems: (1) it includes the evaluation of the partition function and (2) it is formulated in the form of functional optimization. We avoid Problem (1) by using Bethe approximation. Bethe approximation is an approximation technique equivalent to the loopy belief propagation. Problem (2) can be solved by using orthonormal function expansion. Orthonormal function expansion can reduce a functional optimization problem to a function optimization problem. Our method can provide an analytic form of the solution of the inverse problem within the framework of Bethe approximation.
[ 1, 1, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics", "Mathematics" ]
Title: Suspensions of finite-size neutrally-buoyant spheres in turbulent duct flow, Abstract: We study the turbulent square duct flow of dense suspensions of neutrally-buoyant spherical particles. Direct numerical simulations (DNS) are performed in the range of volume fractions $\phi=0-0.2$, using the immersed boundary method (IBM) to account for the dispersed phase. Based on the hydraulic diameter, a Reynolds number of $5600$ is considered. We report flow features and particle statistics specific to this geometry, and compare the results to the case of two-dimensional channel flows. In particular, we observe that for $\phi=0.05$ and $0.1$, particles preferentially accumulate on the corner bisectors, close to the duct corners as also observed for laminar square duct flows of the same duct-to-particle size ratios. At the highest volume fraction, particles preferentially accumulate in the core region. For channel flows, in the absence of lateral confinement, particles are instead found to be uniformly distributed across the channel. We also observe that the intensity of the cross-stream secondary flows increases (with respect to the unladen case) with the volume fraction up to $\phi=0.1$, as a consequence of the high concentration of particles along the corner bisector. For $\phi=0.2$ the turbulence activity is strongly reduced and the intensity of the secondary flows reduces below that of the unladen case. The friction Reynolds number increases with $\phi$ in dilute conditions, as observed for channel flows. However, for $\phi=0.2$ the mean friction Reynolds number decreases below the value for $\phi=0.1$.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Neutron Star Planets: Atmospheric processes and habitability, Abstract: Of the roughly 3000 neutron stars known, only a handful have sub-stellar companions. The most famous of these are the low-mass planets around the millisecond pulsar B1257+12. New evidence indicates that observational biases could still hide a wide variety of planetary systems around most neutron stars. We consider the environment and physical processes relevant to neutron star planets, in particular the effect of X-ray irradiation and the relativistic pulsar wind on the planetary atmosphere. We discuss the survival time of planet atmospheres and the planetary surface conditions around different classes of neutron stars, and define a neutron star habitable zone. Depending on as-yet poorly constrained aspects of the pulsar wind, both Super-Earths around B1257+12 could lie within its habitable zone.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Quantitative Biology" ]
Title: On the number of solutions of some transcendental equations, Abstract: We give upper and lower bounds for the number of solutions of the equation $p(z)\log|z|+q(z)=0$ with polynomials $p$ and $q$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Simultaneous shot inversion for nonuniform geometries using fast data interpolation, Abstract: Stochastic optimization is key to efficient inversion in PDE-constrained optimization. Using 'simultaneous shots', or random superposition of source terms, works very well in simple acquisition geometries where all sources see all receivers, but this rarely occurs in practice. We develop an approach that interpolates data to an ideal acquisition geometry while solving the inverse problem using simultaneous shots. The approach is formulated as a joint inverse problem, combining ideas from low-rank interpolation with full-waveform inversion. Results using synthetic experiments illustrate the flexibility and efficiency of the approach.
[ 0, 0, 0, 1, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Critical neural networks with short and long term plasticity, Abstract: In recent years, self-organised critical neuronal models have provided insights regarding the origin of the experimentally observed avalanching behaviour of neuronal systems. It has been shown that dynamical synapses, as a form of short-term plasticity, can cause critical neuronal dynamics, whereas long-term plasticity, such as Hebbian or activity-dependent plasticity, has a crucial role in shaping the network structure and endowing neural systems with learning abilities. In this work we provide a model which combines both plasticity mechanisms, acting on two different time-scales. The measured avalanche statistics are compatible with experimental results for both the avalanche size and duration distribution with biologically observed percentages of inhibitory neurons. The time-series of neuronal activity exhibits temporal bursts leading to 1/f decay in the power spectrum. The presence of long-term plasticity gives the system the ability to learn binary rules such as XOR, providing the foundation of future research on more complicated tasks such as pattern recognition.
[ 0, 1, 0, 0, 0, 0 ]
[ "Quantitative Biology", "Physics" ]
Title: Harnessing functional segregation across brain rhythms as a means to detect EEG oscillatory multiplexing during music listening, Abstract: Music, being a multifaceted stimulus evolving at multiple timescales, modulates brain function in a manifold way that encompasses not only the distinct stages of auditory perception but also higher cognitive processes like memory and appraisal. Network theory is apparently a promising approach to describe the functional reorganization of brain oscillatory dynamics during music listening. However, the music induced changes have so far been examined within the functional boundaries of isolated brain rhythms. Using naturalistic music, we detected the functional segregation patterns associated with different cortical rhythms, as these were reflected in the surface EEG measurements. The emerged structure was compared across frequency bands to quantify the interplay among rhythms. It was also contrasted against the structure from the rest and noise listening conditions to reveal the specific components stemming from music listening. Our methodology includes an efficient graph-partitioning algorithm, which is further utilized for mining prototypical modular patterns, and a novel algorithmic procedure for identifying switching nodes that consistently change module during music listening. Our results suggest the multiplex character of the music-induced functional reorganization and particularly indicate the dependence between the networks reconstructed from the $\delta$ and $\beta_H$ rhythms. This dependence is further justified within the framework of nested neural oscillations and fits perfectly within the context of recently introduced cortical entrainment to music. Considering its computational efficiency, and in conjunction with the flexibility of in situ electroencephalography, it may lead to novel assistive tools for real-life applications.
[ 0, 0, 0, 0, 1, 0 ]
[ "Quantitative Biology", "Computer Science" ]
Title: Some characterizations of the preimage of $A_{\infty}$ for the Hardy-Littlewood maximal operator and consequences, Abstract: The purpose of this paper is to give some characterizations of the weight functions $w$ such that $Mw$ is in $A_{\infty}$. We show that for those weights, membership in $A_{\infty}$ ensures membership in $A_{1}$. We give a criterion in terms of the local maximal functions $m_{\lambda}$ and we present a pair of applications, among them one similar to the Coifman-Rochberg characterization of $A_{1}$ but using functions of the form $(f^{\#})^{\delta}$ and $(m_{\lambda}u)^{\delta}$ instead of $(Mf)^{\delta}$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Coqatoo: Generating Natural Language Versions of Coq Proofs, Abstract: Due to their numerous advantages, formal proofs and proof assistants, such as Coq, are becoming increasingly popular. However, one disadvantage of using proof assistants is that the resulting proofs can sometimes be hard to read and understand, particularly for less-experienced users. To address this issue, we have implemented a tool capable of generating natural language versions of Coq proofs called Coqatoo, which we present in this paper.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Breakdown of the Chiral Anomaly in Weyl Semimetals in a Strong Magnetic Field, Abstract: The low-energy quasiparticles of Weyl semimetals are a condensed-matter realization of the Weyl fermions introduced in relativistic field theory. Chiral anomaly, the nonconservation of the chiral charge under parallel electric and magnetic fields, is arguably the most important phenomenon of Weyl semimetals and has been explained as an imbalance between the occupancies of the gapless, zeroth Landau levels with opposite chiralities. This widely accepted picture has served as the basis for subsequent studies. Here we report the breakdown of the chiral anomaly in Weyl semimetals in a strong magnetic field based on ab initio calculations. A sizable energy gap that depends sensitively on the direction of the magnetic field may open up due to the mixing of the zeroth Landau levels associated with the opposite-chirality Weyl points that are away from each other in the Brillouin zone. Our study provides a theoretical framework for understanding a wide range of phenomena closely related to the chiral anomaly in topological semimetals, such as magnetotransport, thermoelectric responses, and plasmons, to name a few.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Predicting Auction Price of Vehicle License Plate with Deep Recurrent Neural Network, Abstract: In Chinese societies, superstition is of paramount importance, and vehicle license plates with desirable numbers can fetch very high prices in auctions. Unlike other valuable items, license plates are not allocated an estimated price before auction. I propose that the task of predicting plate prices can be viewed as a natural language processing (NLP) task, as the value depends on the meaning of each individual character on the plate and its semantics. I construct a deep recurrent neural network (RNN) to predict the prices of vehicle license plates in Hong Kong, based on the characters on a plate. I demonstrate the importance of having a deep network and of retraining. Evaluated on 13 years of historical auction prices, the deep RNN outperforms previous models by a significant margin.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Quantitative Finance" ]
Title: An Adaptive, Multivariate Partitioning Algorithm for Global Optimization of Nonconvex Programs, Abstract: In this work, we develop an adaptive, multivariate partitioning algorithm for solving mixed-integer nonlinear programs (MINLP) with multi-linear terms to global optimality. This iterative algorithm primarily exploits the advantages of piecewise polyhedral relaxation approaches via disjunctive formulations to solve MINLPs to global optimality, in contrast to the conventional spatial branch-and-bound approaches. In order to maintain relatively small-scale mixed-integer linear programs at every iteration of the algorithm, we adaptively partition the variable domains appearing in the multi-linear terms. We also provide proofs of convergence guarantees of the proposed algorithm to a global solution. Further, we discuss a few algorithmic enhancements based on a sequential bound-tightening procedure as a presolve step, where we observe the importance of solving piecewise relaxations, compared to basic convex relaxations, to speed up the convergence of the algorithm to global optimality. We demonstrate the effectiveness of our disjunctive formulations and the algorithm on well-known benchmark problems (including Pooling and Blending instances) from MINLPLib and compare with state-of-the-art global optimization solvers. With this novel approach, we solve several large-scale instances which are, in some cases, intractable for the global optimization solver. We also shrink the best known optimality gap for one of the hard, generalized pooling problem instances.
[ 1, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Distributions and Statistical Power of Optimal Signal-Detection Methods In Finite Cases, Abstract: In big data analysis for detecting rare and weak signals among $n$ features, some grouping-test methods such as the Higher Criticism test (HC), the Berk-Jones test (B-J), and the $\phi$-divergence test share a similar asymptotic optimality when $n \rightarrow \infty$. However, in practical data analysis $n$ is frequently small, or moderately large at most. In order to properly apply these optimal tests and wisely choose them for practical studies, it is important to know how to get their p-values and statistical power. To address this problem in an even broader context, this paper provides analytical solutions for a general family of goodness-of-fit (GOF) tests, which covers these optimal tests. For any given i.i.d. and continuous distributions of the input test statistics of the $n$ features, both the p-value and the statistical power of such a GOF test can be calculated. By calculation we compared the finite-sample performances of asymptotically optimal tests under the normal mixture alternative. Results show that HC is the best choice when signals are rare, while B-J is more robust over various signal patterns. In an application to a real genome-wide association study, results illustrate that the p-value calculation works well and that the optimal tests have potential for detecting novel disease genes with weak genetic effects. The calculations have been implemented in an R package, SetTest, published on CRAN.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics" ]