Columns: text (string, lengths 57 to 2.88k), labels (sequence of length 6)
Title: About small eigenvalues of Witten Laplacian, Abstract: We study the eigenvalues of the semiclassical Witten Laplacian $\Delta_\phi$ associated to a potential $\phi$. We consider the case where the sequence of Arrhenius numbers $S_1\leq \ldots\leq S_n$ associated to $\phi$ is degenerate, that is, the preceding inequalities are not necessarily strict.
[ 0, 0, 1, 0, 0, 0 ]
Title: Partial control of delay-coordinate maps, Abstract: Delay-coordinate maps have been widely used recently to study nonlinear dynamical systems when there is access only to the time series of one of their variables. Here, we show how the partial control method can be applied in this framework in order to prevent undesirable situations for the system or even to reduce the variability of the observable time series associated with it. The main advantage of this control method is that it allows delay-coordinate maps to be controlled even if the applied control is smaller than the external disturbances present in the system. To illustrate how it works, we have applied it to three well-known models in Nonlinear Dynamics with different delays: the two-dimensional cubic map, the standard map and the three-dimensional hyperchaotic Hénon map. For the first time, we show here how hyperchaotic systems can be partially controlled.
[ 0, 1, 0, 0, 0, 0 ]
Title: Detecting hip fractures with radiologist-level performance using deep neural networks, Abstract: We developed an automated deep learning system to detect hip fractures from frontal pelvic x-rays, an important and common radiological task. Our system was trained on a decade of clinical x-rays (~53,000 studies) and can be applied to clinical data, automatically excluding inappropriate and technically unsatisfactory studies. We demonstrate diagnostic performance equivalent to a human radiologist and an area under the ROC curve of 0.994. Translated to clinical practice, such a system has the potential to increase the efficiency of diagnosis, reduce the need for expensive additional testing, expand access to expert level medical image interpretation, and improve overall patient outcomes.
[ 0, 0, 0, 1, 0, 0 ]
Title: Large-Scale Online Semantic Indexing of Biomedical Articles via an Ensemble of Multi-Label Classification Models, Abstract: Background: In this paper we present the approaches and methods employed to deal with a large-scale multi-label semantic indexing task of biomedical papers. This work was mainly implemented within the context of the BioASQ challenge of 2014. Methods: The main contribution of this work is a multi-label ensemble method that incorporates a McNemar statistical significance test in order to validate the combination of the constituent machine learning algorithms. Some secondary contributions include a study of the temporal aspects of the BioASQ corpus (the observations also apply to BioASQ's superset, the PubMed articles collection) and the proper adaptation of the algorithms used to deal with this challenging classification task. Results: The ensemble method we developed is compared to other approaches in experimental scenarios with subsets of the BioASQ corpus, giving positive results. During the BioASQ 2014 challenge we obtained first place in the first batch and third place in the two following batches. Our success in the BioASQ challenge proved that a fully automated machine-learning approach, which does not rely on any heuristics or rule-based approaches, can be highly competitive and outperform other approaches in similar challenging contexts.
[ 0, 0, 0, 1, 0, 0 ]
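The preceding abstract validates combinations of classifiers with a McNemar statistical significance test. Below is a minimal sketch of such a pairwise McNemar test between two classifiers' predictions on the same labelled examples; the function name and the toy data are illustrative assumptions, not details of the paper's system.

```python
import numpy as np
from scipy.stats import chi2

def mcnemar_test(y_true, pred_a, pred_b):
    """McNemar test (with continuity correction) comparing two classifiers
    evaluated on the same examples. Returns the chi-square statistic and p-value."""
    a_correct = (pred_a == y_true)
    b_correct = (pred_b == y_true)
    # Discordant pairs: one classifier right, the other wrong.
    b01 = np.sum(a_correct & ~b_correct)   # A right, B wrong
    b10 = np.sum(~a_correct & b_correct)   # A wrong, B right
    if b01 + b10 == 0:
        return 0.0, 1.0                    # the classifiers never disagree
    stat = (abs(b01 - b10) - 1) ** 2 / (b01 + b10)
    return stat, chi2.sf(stat, df=1)

# Toy usage with hypothetical binary predictions of two classifiers.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
pred_a = np.where(rng.random(500) < 0.85, y, 1 - y)   # ~85% accurate classifier
pred_b = np.where(rng.random(500) < 0.80, y, 1 - y)   # ~80% accurate classifier
print(mcnemar_test(y, pred_a, pred_b))
```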
Title: Modeling Temporally Evolving and Spatially Globally Dependent Data, Abstract: The last decades have seen an unprecedented increase in the availability of data sets that are inherently global and temporally evolving, from remotely sensed networks to climate model ensembles. This paper provides a view of statistical modeling techniques for space-time processes, where space is the sphere representing our planet. In particular, we make a distinction between (a) second-order-based and (b) practical approaches to model temporally evolving global processes. The former are based on the specification of a class of space-time covariance functions, with space being the two-dimensional sphere. The latter are based on an explicit description of the dynamics of the space-time process, i.e., by specifying its evolution as a function of its past history with added spatially dependent noise. We especially focus on approach (a), where the literature has been sparse. We provide new models of space-time covariance functions for random fields defined on spheres cross time. Practical approaches, (b), are also discussed, with special emphasis on models built directly on the sphere, without projecting the spherical coordinates onto the plane. We present a case study focused on the analysis of air pollution from the 2015 wildfires in Equatorial Asia, an event which was classified as the year's worst environmental disaster. The paper finishes with a list of the main theoretical and applied research problems in the area, where we expect the statistical community to engage over the next decade.
[ 0, 0, 1, 1, 0, 0 ]
Title: A New Representation of Skeleton Sequences for 3D Action Recognition, Abstract: This paper presents a new method for 3D action recognition with skeleton sequences (i.e., 3D trajectories of human skeleton joints). The proposed method first transforms each skeleton sequence into three clips, each consisting of several frames, for spatial-temporal feature learning using deep neural networks. Each clip is generated from one channel of the cylindrical coordinates of the skeleton sequence. Each frame of the generated clips represents the temporal information of the entire skeleton sequence, and incorporates one particular spatial relationship between the joints. Taken together, the clips include multiple frames with different spatial relationships, which provide useful spatial structural information about the human skeleton. We propose to use deep convolutional neural networks to learn long-term temporal information of the skeleton sequence from the frames of the generated clips, and then use a Multi-Task Learning Network (MTLN) to jointly process all frames of the generated clips in parallel to incorporate spatial structural information for action recognition. Experimental results clearly show the effectiveness of the proposed new representation and feature learning method for 3D action recognition.
[ 1, 0, 0, 0, 0, 0 ]
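The preceding abstract generates each clip from one channel of the cylindrical coordinates of the skeleton sequence. A minimal sketch of that coordinate transform is given below; the array layout (frames x joints x xyz) and the choice of vertical axis are assumptions for illustration, not details from the paper.

```python
import numpy as np

def skeleton_to_cylindrical(seq_xyz):
    """Convert a skeleton sequence of shape (T, J, 3) in Cartesian (x, y, z)
    coordinates into three per-channel arrays (r, theta, z), each of shape (T, J)."""
    x, y, z = seq_xyz[..., 0], seq_xyz[..., 1], seq_xyz[..., 2]
    r = np.sqrt(x ** 2 + y ** 2)    # radial distance from the assumed vertical axis
    theta = np.arctan2(y, x)        # azimuthal angle
    return r, theta, z              # one channel per generated clip

# Toy usage: 100 frames, 25 joints.
seq = np.random.default_rng(1).normal(size=(100, 25, 3))
r, theta, z = skeleton_to_cylindrical(seq)
print(r.shape, theta.shape, z.shape)   # (100, 25) each
```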
Title: From Abstract Entities in Mathematics to Superposition States in Quantum Mechanics, Abstract: Given an equivalence relation ~ on a set U, there are two abstract notions of an element of the quotient set U/~. The #1 abstract notion is a set S=[u] of equivalent elements of U (an equivalence class); the #2 notion is an abstract entity u_{S} that is definite on what is common to the elements of the equivalence class S but is otherwise indefinite on the differences between those elements. For instance, the #1 interpretation of a homotopy type is an equivalence class of homotopic spaces, but the #2 interpretation, e.g., as developed in homotopy type theory, is an abstract space (without points) that has the properties that are in common to the spaces in the equivalence class but is otherwise indefinite. In philosophy, the #2 abstract entities might be called paradigm-universals, e.g., `the white thing' as opposed to the #1 abstract notion of "the set of white things" (out of some given collection U). The paper shows how this #2 notion of a paradigm may be mathematically modeled using incidence matrices in Boolean logic and density matrices in probability theory. Then we cross the bridge to the density matrix treatment of the indefinite superposition states in quantum mechanics (QM). This connection between the #2 abstracts in mathematics and ontic indefinite states in QM elucidates Abner Shimony's literal or objective indefiniteness interpretation of QM.
[ 0, 1, 1, 0, 0, 0 ]
Title: A note on effective descent for overconvergent isocrystals, Abstract: In this short note we explain the proof that proper surjective and faithfully flat maps are morphisms of effective descent for overconvergent isocrystals. We then show how to deduce the folklore theorem that for an arbitrary variety over a perfect field of characteristic $p$, the Frobenius pull-back functor is an equivalence on the overconvergent category.
[ 0, 0, 1, 0, 0, 0 ]
Title: Banach strong Novikov conjecture for polynomially contractible groups, Abstract: We prove the Banach strong Novikov conjecture for groups having polynomially bounded higher-order combinatorial functions. This includes all automatic groups.
[ 0, 0, 1, 0, 0, 0 ]
Title: Fabrication of quencher-free liquid scintillator-based, high-activity $^{222}$Rn calibration sources for the Borexino detector, Abstract: A reliable and consistently reproducible technique to fabricate $^{222}$Rn-loaded radioactive sources ($\sim$0.5-1 kBq just after fabrication) based on liquid scintillator (LS), with negligible amounts of LS quencher contaminants, was implemented. This work demonstrates the process that will be used during the Borexino detector's upcoming calibration campaign, in which one or several such $\sim$100 Bq sources will be deployed at different positions in its fiducial volume, which currently shows unprecedented levels of radiopurity. These sources need to fulfill stringent requirements on $^{222}$Rn activity, transparency to the radiations of interest and complete removability from the detector to ensure their impact on Borexino's radiopurity is negligible. Moreover, the need for a clean, undistorted spectral signal for the calibrations imposes a tight requirement to keep quenching agents ("quenchers") at null or extremely low levels.
[ 0, 1, 0, 0, 0, 0 ]
Title: Identification of Voice Utterance with Aging Factor Using the Method of MFCC Multichannel, Abstract: This research was conducted to develop a method to identify voice utterances that have changed due to the aging factor, over intervals of 10 to 25 years. The change of a voice utterance influenced by the aging factor can be extracted by MFCC (Mel Frequency Cepstrum Coefficient) features. However, the compatibility level of the features may drop to 55%, while utterances that do not undergo such change may reach 95%. To improve the compatibility of voice features changed by the aging factor, a more specific feature extraction method is developed that separates the voice into several channels, referred to as multichannel MFCC, consisting of multichannel 5 filterbank (M5FB), multichannel 2 filterbank (M2FB) and multichannel 1 filterbank (M1FB). The test results show that the M5FB and M2FB models achieve the highest compatibility levels of 85% and 82% for the 25-year interval, while the M5FB model achieves the highest level of 86% for the 10-year interval.
[ 1, 0, 0, 0, 0, 0 ]
Title: Single Letter Expression of Capacity for a Class of Channels with Memory, Abstract: We study finite-alphabet channels with unit memory on the previous channel outputs, called UMCO channels. We identify necessary and sufficient conditions to test whether the capacity-achieving channel input distributions with feedback are time-invariant, and whether feedback capacity is characterized by single-letter expressions, similar to those of memoryless channels. The method is based on showing that a certain dynamic programming equation, which in general is a nested optimization problem over the sequence of channel input distributions, reduces to a non-nested optimization problem. Moreover, for UMCO channels, we give a simple expression for the ML error exponent, and we identify sufficient conditions to test whether feedback does not increase capacity. We derive similar results when transmission cost constraints are imposed. We apply the results to a special class of UMCO channels, the Binary State Symmetric Channel (BSSC), with and without transmission cost constraints, to show that the optimization problem of feedback capacity is non-nested, that the capacity-achieving channel input distribution and the corresponding channel output transition probability distribution are time-invariant, and that feedback capacity is characterized by a single-letter formula, precisely as Shannon's single-letter characterization of the capacity of memoryless channels. Then we derive closed-form expressions for the capacity-achieving channel input distribution and feedback capacity, and we use these expressions to evaluate an error exponent for ML decoding.
[ 1, 0, 1, 0, 0, 0 ]
Title: Spectra of quadratic vector fields on $\mathbb{C}^2$: The missing relation, Abstract: Consider a quadratic vector field on $\mathbb{C}^2$ having an invariant line at infinity and isolated singularities only. We define the extended spectra of singularities to be the collection of the spectra of the linearization matrices of each of the singular points over the affine part, together with all the characteristic numbers (i.e. Camacho-Sad indices) at infinity. This collection consists of 11 complex numbers, and is invariant under affine equivalence of vector fields. In this paper we describe all polynomial relations among these numbers. There are 5 independent polynomial relations; four of them follow from the Euler-Jacobi, the Baum-Bott and the Camacho-Sad index theorems, and are well known. The fifth relation was, until now, completely unknown. We provide an explicit formula for the missing fifth relation, discuss its meaning and prove that it cannot be formulated as an index theorem.
[ 0, 0, 1, 0, 0, 0 ]
Title: A Controlled Set-Up Experiment to Establish Personalized Baselines for Real-Life Emotion Recognition, Abstract: We design, conduct and present the results of a highly personalized baseline emotion recognition experiment, which aims to set reliable ground-truth estimates for the subject's emotional state for real-life prediction under similar conditions using a small number of physiological sensors. We also propose an adaptive stimuli-selection mechanism that would use the user's feedback as a guide for future stimuli selection in the controlled-setup experiment and generate optimal ground-truth personalized sessions systematically. Initial results are very promising (85% accuracy) and variable importance analysis shows that only a few features, which are easy to implement in portable devices, would suffice to predict the subject's emotional state.
[ 1, 0, 0, 1, 0, 0 ]
Title: Continuity of the Green function in meromorphic families of polynomials, Abstract: We prove that along any marked point the Green function of a meromorphic family of polynomials parameterized by the punctured unit disk explodes exponentially fast near the origin with a continuous error term.
[ 0, 0, 1, 0, 0, 0 ]
Title: The World's First Real-Time Testbed for Massive MIMO: Design, Implementation, and Validation, Abstract: This paper sets up a framework for designing a massive multiple-input multiple-output (MIMO) testbed by investigating hardware (HW) and system-level requirements such as processing complexity, duplexing mode and frame structure. Taking these into account, a generic system and processing partitioning is proposed which allows flexible scaling and processing distribution onto a multitude of physically separated devices. Based on the given HW constraints, such as the maximum number of links and maximum throughput for peer-to-peer interconnections combined with processing capabilities, the framework allows the evaluation of modular HW components. To verify our design approach, we present the LuMaMi (Lund University Massive MIMO) testbed, which constitutes the first reconfigurable real-time HW platform for prototyping massive MIMO. Utilizing up to 100 base station antennas and more than 50 Field Programmable Gate Arrays, up to 12 user equipments are served on the same time/frequency resource using an LTE-like Orthogonal Frequency Division Multiplexing time-division duplex-based transmission scheme. Proof-of-concept tests with this system show that massive MIMO can simultaneously serve a multitude of users in a static indoor and static outdoor environment utilizing the same time/frequency resource.
[ 1, 0, 1, 0, 0, 0 ]
Title: Fuel-Efficient En Route Formation of Truck Platoons, Abstract: The problem of how to coordinate a large fleet of trucks with given itineraries to enable fuel-efficient platooning is considered. Platooning is a promising technology that enables trucks to save significant amounts of fuel by driving close together and thus reducing air drag. A setting is considered in which each truck in a fleet is provided with a start location, a destination, a departure time, and an arrival deadline from a higher planning level. The goal is to compute fuel-efficient plans, consisting of routes and speed profiles, that allow trucks to arrive by their arrival deadlines. In this way, trucks can meet on common parts of their routes and form platoons, resulting in decreased fuel consumption. We formulate a combinatorial optimization problem that combines plans involving only two vehicles. We show that this problem is hard to solve for large problem instances. Hence, a heuristic algorithm is proposed. The resulting plans are further optimized using convex optimization techniques. The method is evaluated with Monte Carlo simulations in a realistic setting. We demonstrate that the proposed algorithm can compute plans for thousands of trucks and that significant fuel savings can be achieved.
[ 1, 0, 0, 0, 0, 0 ]
Title: Context-Independent Polyphonic Piano Onset Transcription with an Infinite Training Dataset, Abstract: Many of the recent approaches to polyphonic piano note onset transcription require training a machine learning model on a large piano database. However, such approaches are limited by dataset availability; additional training data is difficult to produce, and proposed systems often perform poorly on novel recording conditions. We propose a method to quickly synthesize arbitrary quantities of training data, avoiding the need for curating large datasets. Various aspects of piano note dynamics - including nonlinearity of note signatures with velocity, different articulations, temporal clustering of onsets, and nonlinear note partial interference - are modeled to match the characteristics of real pianos. Our method also avoids the disentanglement problem, a recently noted issue affecting machine-learning based approaches. We train a feed-forward neural network with two hidden layers on our generated training data and achieve both good transcription performance on the large MAPS piano dataset and excellent generalization qualities.
[ 1, 0, 0, 1, 0, 0 ]
Title: On the Implementation of a Scalable Simulator for Multiscale Hybrid-Mixed Methods, Abstract: The family of Multiscale Hybrid-Mixed (MHM) finite element methods has received considerable attention from the mathematics and engineering community in the last few years. The MHM methods allow solving highly heterogeneous problems on coarse meshes while providing solutions with high-order precision. It embeds independent local problems which are responsible for upscaling unresolved scales into the numerical solution. These local contributions are brought together through a global problem defined on the skeleton of the coarse partition. Since the local problems are completely independent, they can be easily computed in parallel. In this paper, we present two simulator prototypes specifically crafted for the MHM methods, which adopt two different implementation strategies: (i) a multi-programming language approach, each language tackling different simulation issues; and (ii) a classical, single-programming language approach. Specifically, we use C++ for numerical computation of the global and local problems in a modular way; for process distribution in the simulator, we adopt the Erlang concurrent language in the first approach, and the MPI standard in the second approach. The aim of exploring these different approaches is twofold: (i) allow for the deployment of the simulator both in high-performance computing (with MPI) and in cloud computing environments (with Erlang); and (ii) pave the way for further exploration of quality attributes related to software productivity and fault-tolerance, which are key to Exascale systems. We present a performance evaluation of the two simulator prototypes taking into account their efficiency.
[ 1, 0, 1, 0, 0, 0 ]
Title: Deformation theory of the blown-up Seiberg-Witten equation in dimension three, Abstract: Associated with every quaternionic representation of a compact, connected Lie group there is a Seiberg-Witten equation in dimension three. The moduli spaces of solutions to these equations are typically non-compact. We construct Kuranishi models around boundary points of a partially compactified moduli space. The Haydys correspondence identifies such boundary points with Fueter sections - solutions of a non-linear Dirac equation - of the bundle of hyperkähler quotients associated with the quaternionic representation. We discuss when such a Fueter section can be deformed to a solution of the Seiberg-Witten equation.
[ 0, 0, 1, 0, 0, 0 ]
Title: On a Fractional Stochastic Hodgkin-Huxley Model, Abstract: The model studied in this paper is a stochastic extension of the so-called neuron model introduced by Hodgkin and Huxley. In the sense of rough paths, the model is perturbed by a multiplicative noise driven by a fractional Brownian motion, with a vector field satisfying the viability condition of Coutin and Marie for $\mathbb R\times [0,1]^3$. An application to the modeling of the membrane potential of nerve fibers damaged by a neuropathy is provided.
[ 0, 0, 1, 0, 0, 0 ]
Title: Approximations and Bounds for (n, k) Fork-Join Queues: A Linear Transformation Approach, Abstract: Compared to basic fork-join queues, a job in an (n, k) fork-join queue only needs k out of its n sub-tasks to be finished. Since (n, k) fork-join queues are prevalent in popular distributed systems, erasure-coding-based cloud storage, and modern network protocols like multipath routing, estimating the sojourn time of such queues is critical for the performance measurement and resource planning of computer clusters. However, this estimation has remained a well-known open challenge for years, and only rough bounds for a limited range of load factors have been given. In this paper, we develop a closed-form linear transformation technique for jointly identical random variables: an order statistic can be represented by a linear combination of maxima. This new technique is then used to transform the sojourn time of non-purging (n, k) fork-join queues into a linear combination of the sojourn times of basic (k, k), (k+1, k+1), ..., (n, n) fork-join queues. Consequently, existing approximations for basic fork-join queues can be bridged to approximations for non-purging (n, k) fork-join queues. The uncovered approximations are then used to improve the upper bounds for purging (n, k) fork-join queues. Simulation experiments show that this linear transformation approach works well for moderate n and relatively large k.
[ 1, 0, 0, 1, 0, 0 ]
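The preceding abstract rests on the fact that an (n, k) fork-join job finishes when k of its n sub-tasks finish, i.e. at the k-th order statistic of the sub-task completion times. The Monte Carlo sketch below illustrates only that reading, for a single job in isolation with i.i.d. exponential sub-task times; it does not reproduce the paper's linear-transformation approximation or any queueing dynamics.

```python
import numpy as np

def mean_kth_completion(n, k, rate=1.0, trials=200_000, seed=0):
    """Monte Carlo estimate of E[k-th order statistic] of n i.i.d. Exp(rate)
    sub-task times: the completion time of an (n, k) fork-join job in isolation."""
    rng = np.random.default_rng(seed)
    samples = rng.exponential(1.0 / rate, size=(trials, n))
    kth = np.sort(samples, axis=1)[:, k - 1]        # k-th smallest per trial
    return kth.mean()

# For exponentials the exact value is the partial harmonic sum 1/n + ... + 1/(n-k+1).
n, k = 10, 7
exact = sum(1.0 / i for i in range(n - k + 1, n + 1))
print(mean_kth_completion(n, k), exact)
```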
Title: Integrated analysis of energy transfers in elastic-wave turbulence, Abstract: In elastic-wave turbulence, strong turbulence appears at small wave numbers while weak turbulence appears at large wave numbers. Energy transfers in the coexistence of these turbulent states are numerically investigated in both the Fourier space and the real space. An analytical expression for a detailed energy balance reveals from which mode to which mode energy is transferred in the triad interaction. Stretching energy excited by the external force is transferred nonlocally and intermittently to large wave numbers as kinetic energy in the strong turbulence. In the weak turbulence, the resonant interactions described by weak turbulence theory produce a cascading net energy transfer to large wave numbers. Because the system's nonlinearity shows strong temporal intermittency, the energy transfers are investigated at active and moderate phases separately. The nonlocal interactions in the Fourier space are characterized by intermittent bundles of fibrous structures in the real space.
[ 0, 1, 0, 0, 0, 0 ]
Title: Effect of annealing on the magnetic properties of zinc ferrite thin films, Abstract: We report on the magnetic properties of a zinc ferrite thin film deposited on a SrTiO$_3$ single crystal using pulsed laser deposition. X-ray diffraction results indicate the highly oriented single-phase growth of the film along with the presence of strain. In contrast to the bulk antiferromagnetic order, the as-deposited film has been found to exhibit ferrimagnetic ordering with a coercive field of 1140~Oe at 5~K. A broad maximum, at $\approx$105~K, observed in the zero-field-cooled magnetization curve indicates a wide grain size distribution for the as-deposited film. A reduction in magnetization and blocking temperature has been observed after annealing in both argon and oxygen atmospheres, where the variation was found to depend on the annealing temperature.
[ 0, 1, 0, 0, 0, 0 ]
Title: Statistical Speech Model Description with VMF Mixture Model, Abstract: In this paper, we represent the LSF parameters in a unit vector form, which has directional characteristics. The underlying distribution of this unit vector variable is modeled by a von Mises-Fisher mixture model (VMM). Using high-rate theory, the optimal inter-component bit allocation strategy is proposed and the distortion-rate (D-R) relation is derived for the VMM-based VQ (VVQ). Experimental results show that the VVQ outperforms our recently introduced DVQ and the conventional GVQ.
[ 1, 0, 0, 0, 0, 0 ]
Title: Predicting Atomic Decay Rates Using an Informational-Entropic Approach, Abstract: We show that configurational entropy (CE), a newly proposed Shannon-like entropic measure of shape complexity applicable to spatially localized or periodic mathematical functions, can be used as a predictor of spontaneous decay rates for one-electron atoms. The CE is constructed from the Fourier transform of the atomic probability density. For the hydrogen atom with degenerate states labeled with the principal quantum number n, we obtain a scaling law relating the n-averaged decay rates to the respective CE. The scaling law allows us to predict the n-averaged decay rate without relying on the traditional computation of dipole matrix elements. We tested the predictive power of our approach up to n=20, obtaining an accuracy better than 3.7% within our numerical precision, as compared to spontaneous decay tables listed in the literature.
[ 0, 1, 0, 0, 0, 0 ]
Title: A Fast Numerical Scheme for the Godunov-Peshkov-Romenski Model of Continuum Mechanics, Abstract: A new second-order numerical scheme based on an operator splitting is proposed for the Godunov-Peshkov-Romenski model of continuum mechanics. The homogeneous part of the system is solved with a finite volume method based on a WENO reconstruction, and the temporal ODEs are solved using some analytic results presented here. Whilst it is not possible to attain arbitrary-order accuracy with this scheme (as with ADER-WENO schemes used previously), the attainable order of accuracy is often sufficient, and solutions are computationally cheap when compared with other available schemes. The new scheme is compared with an ADER-WENO scheme for various test cases, and a convergence study is undertaken to demonstrate its order of accuracy.
[ 0, 1, 1, 0, 0, 0 ]
Title: Explaining Transition Systems through Program Induction, Abstract: Explaining and reasoning about processes which underlie observed black-box phenomena enables the discovery of causal mechanisms, derivation of suitable abstract representations and the formulation of more robust predictions. We propose to learn high level functional programs in order to represent abstract models which capture the invariant structure in the observed data. We introduce the $\pi$-machine (program-induction machine) -- an architecture able to induce interpretable LISP-like programs from observed data traces. We propose an optimisation procedure for program learning based on backpropagation, gradient descent and A* search. We apply the proposed method to three problems: system identification of dynamical systems, explaining the behaviour of a DQN agent and learning by demonstration in a human-robot interaction scenario. Our experimental results show that the $\pi$-machine can efficiently induce interpretable programs from individual data traces.
[ 1, 0, 0, 0, 0, 0 ]
Title: Approximate Gradient Coding via Sparse Random Graphs, Abstract: Distributed algorithms are often beset by the straggler effect, where the slowest compute nodes in the system dictate the overall running time. Coding-theoretic techniques have been recently proposed to mitigate stragglers via algorithmic redundancy. Prior work in coded computation and gradient coding has mainly focused on exact recovery of the desired output. However, slightly inexact solutions can be acceptable in applications that are robust to noise, such as model training via gradient-based algorithms. In this work, we present computationally simple gradient codes based on sparse graphs that guarantee fast and approximately accurate distributed computation. We demonstrate that sacrificing a small amount of accuracy can significantly increase algorithmic robustness to stragglers.
[ 1, 0, 0, 1, 0, 0 ]
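The abstract above describes gradient codes built on sparse graphs that trade a small loss of accuracy for straggler robustness. The sketch below is a generic illustration of that idea under simple assumptions, not the paper's specific code construction: each worker sums d randomly chosen partial gradients, and the decoder forms a rescaled sum over the non-straggling workers, which is unbiased under uniform random assignment.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_workers, d = 40, 20, 4            # data partitions, workers, partitions per worker
dim = 100
partial_grads = rng.normal(size=(k, dim))
full_grad = partial_grads.sum(axis=0)

# Sparse random assignment: worker j gets d partitions chosen uniformly at random.
assignment = [rng.choice(k, size=d, replace=False) for _ in range(n_workers)]
codes = np.stack([partial_grads[idx].sum(axis=0) for idx in assignment])

def approx_decode(returned):
    """Rescaled sum over returned worker codes (approximate gradient recovery)."""
    return (k / (d * len(returned))) * codes[returned].sum(axis=0)

for n_stragglers in (0, 2, 5):
    returned = rng.choice(n_workers, size=n_workers - n_stragglers, replace=False)
    est = approx_decode(returned)
    rel_err = np.linalg.norm(est - full_grad) / np.linalg.norm(full_grad)
    print(n_stragglers, "stragglers -> relative error", round(float(rel_err), 3))
```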
Title: Word Embeddings via Tensor Factorization, Abstract: Most popular word embedding techniques involve implicit or explicit factorization of a word co-occurrence-based matrix into low-rank factors. In this paper, we aim to generalize this trend by using numerical methods to factor higher-order word co-occurrence-based arrays, or \textit{tensors}. We present four word embeddings using tensor factorization and analyze their advantages and disadvantages. One of our main contributions is a novel joint symmetric tensor factorization technique related to the idea of coupled tensor factorization. We show that embeddings based on tensor factorization can be used to discern the various meanings of polysemous words without being explicitly trained to do so, and we provide intuition for why this works for tensor-based embeddings in a way that it does not for existing methods. We also modify an existing word embedding evaluation metric known as Outlier Detection [Camacho-Collados and Navigli, 2016] to evaluate the quality of the order-$N$ relations that a word embedding captures, and show that tensor-based methods outperform existing matrix-based methods at this task. Experimentally, we show that all of our word embeddings either outperform or are competitive with state-of-the-art baselines commonly used today on a variety of recent datasets. Suggested applications of tensor factorization-based word embeddings are given, and all source code and pre-trained vectors are publicly available online.
[ 1, 0, 0, 1, 0, 0 ]
Title: Lower bounds for the index of compact constant mean curvature surfaces in $\mathbb R^{3}$ and $\mathbb S^{3}$, Abstract: Let $M$ be a compact constant mean curvature surface either in $\mathbb{S}^3$ or $\mathbb{R}^3$. In this paper we prove that the stability index of $M$ is bounded below by a linear function of the genus. As a by-product we obtain a comparison theorem between the spectrum of the Jacobi operator of $M$ and that of the Hodge Laplacian of $1$-forms on $M$.
[ 0, 0, 1, 0, 0, 0 ]
Title: On the maximum principle for a time-fractional diffusion equation, Abstract: In this paper, we discuss the maximum principle for a time-fractional diffusion equation $$ \partial_t^\alpha u(x,t) = \sum_{i,j=1}^n \partial_i(a_{ij}(x)\partial_j u(x,t)) + c(x)u(x,t) + F(x,t),\ t>0,\ x \in \Omega \subset {\mathbb R}^n$$ with the Caputo time-derivative of the order $\alpha \in (0,1)$ in the case of the homogeneous Dirichlet boundary condition. Compared to the already published results, our findings have two important special features. First, we derive a maximum principle for a suitably defined weak solution in the fractional Sobolev spaces, not for the strong solution. Second, for non-negative source functions $F = F(x,t)$ we prove the non-negativity of the weak solution to the problem under consideration without any restriction on the sign of the coefficient $c=c(x)$ of the zeroth-order term in the spatial differential operator. Moreover, we prove the monotonicity of the solution with respect to the coefficient $c=c(x)$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Hydrophobic Ice Confined between Graphene and MoS2, Abstract: The structure and nature of water confined between hydrophobic molybdenum disulfide (MoS2) and graphene (Gr) are investigated at room temperature by means of atomic force microscopy. We find the formation of two-dimensional (2D) crystalline ice layers. In contrast to the hexagonal ice 'bilayers' of bulk ice, these 2D crystalline ice phases consist of two planar hexagonal layers. Additional water condensation leads to either lateral expansion of the ice layers or to the formation of three-dimensional water droplets on top or at the edges of the two-layer ice, indicating that water does not wet these planar ice films. The results presented here are in line with a recent theory suggesting that water confined between hydrophobic walls forms 2D crystalline two-layer ice with a nontetrahedral geometry and intrahydrogen bonding. The lack of dangling bonds on either surface of the ice film gives rise to a hydrophobic character. The unusual geometry of these ice films is of great potential importance in biological systems with water in direct contact with hydrophobic surfaces.
[ 0, 1, 0, 0, 0, 0 ]
Title: Changes in the flagellar bundling time account for variations in swimming behavior of flagellated bacteria in viscous media, Abstract: Although the motility of the flagellated bacterium Escherichia coli has been widely studied, the effect of viscosity on swimming speed remains controversial. The swimming mode of wild-type E. coli is often idealized as a "run-and-tumble" sequence in which periods of swimming at a constant speed are randomly interrupted by a sudden change of direction at a very low speed. Using a tracking microscope, we follow cells for extended periods of time in Newtonian liquids of varying viscosity, and find that the swimming behavior of a single cell can exhibit a variety of behaviors including run-and-tumble and "slow-random-walk" in which the cells move at relatively low speed. Although the characteristic swimming speed varies between individuals and in different polymer solutions, we find that the skewness of the speed distribution is solely a function of viscosity and can be used, in concert with the measured average swimming speed, to determine the effective running speed of each cell. We hypothesize that differences in the swimming behavior observed in solutions of different viscosity are due to changes in the flagellar bundling time, which increases as the viscosity rises, due to the lower rotation rate of the flagellar motor. A numerical simulation and the use of Resistive Force theory provide support for this hypothesis.
[ 0, 1, 0, 0, 0, 0 ]
Title: Exploring Heritability of Functional Brain Networks with Inexact Graph Matching, Abstract: Data-driven brain parcellations aim to provide a more accurate representation of an individual's functional connectivity, since they are able to capture individual variability that arises due to development or disease. This renders comparisons between the emerging brain connectivity networks more challenging, since correspondences between their elements are not preserved. Unveiling these correspondences is of major importance to keep track of local functional connectivity changes. We propose a novel method based on graph edit distance for the comparison of brain graphs directly in their domain, that can accurately reflect similarities between individual networks while providing the network element correspondences. This method is validated on a dataset of 116 twin subjects provided by the Human Connectome Project.
[ 1, 0, 0, 0, 0, 0 ]
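The abstract above compares individual brain graphs via a graph edit distance. The snippet below only illustrates that distance on two tiny toy graphs using NetworkX's generic routine; it is not the paper's correspondence method, and the toy graphs merely stand in for the much larger functional brain networks.

```python
import networkx as nx

# Two small toy graphs standing in for (much larger) functional brain networks.
G1 = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0)])          # a 4-cycle
G2 = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])  # the same cycle plus a chord

# Exact graph edit distance (exponential in general, fine for toy graphs):
# one edge insertion turns G1 into G2, so the distance is 1.
print(nx.graph_edit_distance(G1, G2))
```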
Title: Bianchi type-II universe with wet dark fluid in General Theory of Relativity, Abstract: In this paper, dark energy models of the universe filled with wet dark fluid are constructed in the framework of the LRS Bianchi type-II space-time in the General Theory of Relativity. A new equation of state, modeled on $p = \gamma(\rho - \rho_*)$ and capable of describing a liquid such as water, is used. The exact solutions of Einstein's field equations are obtained in quadrature form, and the models corresponding to the cases $\gamma = 0$ and $\gamma = 1$ are discussed in detail.
[ 0, 1, 0, 0, 0, 0 ]
Title: Solutions of generic bilinear master equations for a quantum oscillator -- positive and factorized conditions on stationary states, Abstract: We obtain the solutions of the generic bilinear master equation for a quantum oscillator with constant coefficients in the Gaussian form. The well-behavedness and positive semidefiniteness of the stationary states could be characterized by a three-dimensional Minkowski vector. By requiring the stationary states to satisfy a factorized condition, we obtain a generic class of master equations that includes the well-known ones and their generalizations, some of which are completely positive. A further subset of the master equations with the Gibbs states as stationary states is also obtained. For master equations with not completely positive generators, an analysis on the stationary states suggests conditions on the coefficients of the master equations that generate positive evolution for a given initial state.
[ 0, 1, 0, 0, 0, 0 ]
Title: Semantic Instance Segmentation with a Discriminative Loss Function, Abstract: Semantic instance segmentation remains a challenging task. In this work we propose to tackle the problem with a discriminative loss function, operating at the pixel level, that encourages a convolutional network to produce a representation of the image that can easily be clustered into instances with a simple post-processing step. The loss function encourages the network to map each pixel to a point in feature space so that pixels belonging to the same instance lie close together while different instances are separated by a wide margin. Our approach of combining an off-the-shelf network with a principled loss function inspired by a metric learning objective is conceptually simple and distinct from recent efforts in instance segmentation. In contrast to previous works, our method does not rely on object proposals or recurrent mechanisms. A key contribution of our work is to demonstrate that such a simple setup without bells and whistles is effective and can perform on par with more complex methods. Moreover, we show that it does not suffer from some of the limitations of the popular detect-and-segment approaches. We achieve competitive performance on the Cityscapes and CVPPP leaf segmentation benchmarks.
[ 1, 0, 0, 0, 0, 0 ]
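The abstract above describes a loss that pulls pixel embeddings toward their instance mean and pushes different instance means apart by a margin. The numpy sketch below is one hedged way to realize such a pull/push objective on a batch of per-pixel embeddings; the hinge margins delta_v and delta_d and the exact weighting are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def discriminative_loss(embeddings, instance_ids, delta_v=0.5, delta_d=1.5):
    """Pull term: embeddings close to their instance mean (beyond a margin delta_v).
    Push term: instance means at least 2*delta_d apart.
    embeddings: (N_pixels, D); instance_ids: (N_pixels,) integer labels."""
    ids = np.unique(instance_ids)
    means = np.stack([embeddings[instance_ids == i].mean(axis=0) for i in ids])

    # Variance (pull) term: hinged distance of each pixel to its instance mean.
    pull = 0.0
    for m, i in zip(means, ids):
        dist = np.linalg.norm(embeddings[instance_ids == i] - m, axis=1)
        pull += np.mean(np.maximum(dist - delta_v, 0.0) ** 2)
    pull /= len(ids)

    # Distance (push) term: hinged gap between every pair of instance means.
    push, pairs = 0.0, 0
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            gap = np.linalg.norm(means[a] - means[b])
            push += np.maximum(2 * delta_d - gap, 0.0) ** 2
            pairs += 1
    push = push / pairs if pairs else 0.0
    return pull + push

# Toy usage: 200 pixels, 8-D embeddings, 3 roughly separated instances.
rng = np.random.default_rng(2)
ids = rng.integers(0, 3, size=200)
emb = rng.normal(size=(200, 8)) + ids[:, None] * 2.0
print(discriminative_loss(emb, ids))
```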
Title: A Cost-Sensitive Deep Belief Network for Imbalanced Classification, Abstract: Imbalanced data with a skewed class distribution are common in many real-world applications. The Deep Belief Network (DBN) is a machine learning technique that is effective in classification tasks. However, a conventional DBN does not work well for imbalanced data classification because it assumes equal costs for each class. To deal with this problem, cost-sensitive approaches assign different misclassification costs to different classes without disrupting the true data sample distributions. However, due to a lack of prior knowledge, the misclassification costs are usually unknown and hard to choose in practice. Moreover, it has not been well studied how cost-sensitive learning could improve DBN performance on imbalanced data problems. This paper proposes an evolutionary cost-sensitive deep belief network (ECS-DBN) for imbalanced classification. ECS-DBN uses adaptive differential evolution to optimize the misclassification costs based on training data, which presents an effective approach to incorporating the evaluation measure (i.e., G-mean) into the objective function. We first optimize the misclassification costs, then apply them to the deep belief network. Adaptive differential evolution is implemented as the optimization algorithm that automatically updates its corresponding parameters without the need for prior domain knowledge. The experiments have shown that the proposed approach consistently outperforms the state-of-the-art on both benchmark datasets and a real-world dataset for fault diagnosis in tool condition monitoring.
[ 0, 0, 0, 1, 0, 0 ]
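The abstract above couples differential evolution with a G-mean objective to choose per-class misclassification costs. The sketch below illustrates that coupling under substitutions: scikit-learn's logistic regression stands in for the deep belief network, and SciPy's (non-adaptive) differential evolution stands in for the adaptive variant; all names, bounds and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Imbalanced toy data (about 9:1 class ratio).
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def neg_gmean(costs):
    """Train with the candidate per-class costs and return the negative G-mean
    (sqrt of sensitivity * specificity) on the validation split."""
    clf = LogisticRegression(class_weight={0: costs[0], 1: costs[1]}, max_iter=1000)
    clf.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_val, clf.predict(X_val)).ravel()
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return -np.sqrt(sens * spec)

result = differential_evolution(neg_gmean, bounds=[(0.1, 10.0), (0.1, 10.0)],
                                maxiter=20, seed=0, tol=1e-3)
print("best costs:", result.x, "G-mean:", -result.fun)
```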
Title: Ordered p-median problems with neighborhoods, Abstract: In this paper, we introduce a new variant of the $p$-median facility location problem in which it is assumed that the exact location of the potential facilities is unknown. Instead, each of the facilities must be located in a region around their initially assigned location (the neighborhood). In this problem, two main decisions have to be made simultaneously: the determination of the potential facilities that must be open to serve the demands of the customers and the location of the open facilities in their neighborhoods, at global minimum cost. We present several mixed integer non-linear programming formulations for a wide family of objective functions which are common in Location Analysis: ordered median functions. We also develop two math-heuristic approaches for solving the problem. We report the results of extensive computational experiments.
[ 0, 0, 1, 0, 0, 0 ]
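The abstract above is built around ordered median objective functions, which apply weights to the sorted vector of allocation costs. The tiny sketch below evaluates such an objective for a fixed set of open facilities; the client and facility coordinates and the lambda weights are made-up inputs, and the neighborhood aspect of the model is not represented.

```python
import numpy as np

def ordered_median_cost(clients, open_facilities, lam):
    """Ordered median objective: assign each client to its nearest open facility,
    sort the resulting costs in non-decreasing order, and weight them by lam."""
    d = np.linalg.norm(clients[:, None, :] - open_facilities[None, :, :], axis=2)
    costs = d.min(axis=1)                 # nearest-facility cost per client
    return np.dot(np.sort(costs), lam)    # lam[i] weights the i-th smallest cost

rng = np.random.default_rng(3)
clients = rng.random((10, 2))
facilities = rng.random((3, 2))
# lam of all ones recovers the p-median objective; (0, ..., 0, 1) recovers the p-center objective.
print(ordered_median_cost(clients, facilities, np.ones(10)))
print(ordered_median_cost(clients, facilities, np.eye(10)[-1]))
```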
Title: Multi-Block Interleaved Codes for Local and Global Read Access, Abstract: We define multi-block interleaved codes as codes that allow reading information from either a small sub-block or from a larger full block. The former offers faster access, while the latter provides better reliability. We specify the correction capability of the sub-block code through its gap $t$ from optimal minimum distance, and look to have full-block minimum distance that grows with the parameter $t$. We construct two families of such codes when the number of sub-blocks is $3$. The codes match the distance properties of known integrated-interleaving codes, but with the added feature of mapping the same number of information symbols to each sub-block. As such, they are the first codes that provide read access in multiple size granularities and correction capabilities.
[ 1, 0, 0, 0, 0, 0 ]
Title: Ranking Recovery from Limited Comparisons using Low-Rank Matrix Completion, Abstract: This paper proposes a new method for solving the well-known rank aggregation problem from pairwise comparisons using the method of low-rank matrix completion. The partial and noisy data of pairwise comparisons is transformed into a matrix form. We then use tools from matrix completion, which has served as a major component in the low-rank completion solution of the Netflix challenge, to construct the preference of the different objects. In our approach, the data of multiple comparisons is used to create an estimate of the probability that object i wins (or is chosen) over object j, where only a partial set of comparisons between N objects is known. The data is then transformed into a matrix form for which the noiseless solution has a known rank of one. An alternating minimization algorithm, in which the target matrix takes a bilinear form, is then used in combination with maximum likelihood estimation for both factors. The reconstructed matrix is used to obtain the true underlying preference intensity. This work demonstrates the improvement of our proposed algorithm over the current state of the art in both simulated scenarios and real data.
[ 1, 0, 0, 1, 0, 0 ]
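The abstract above transforms noisy pairwise win probabilities into a matrix whose noiseless version has rank one and recovers it by alternating minimization in bilinear form. The sketch below illustrates one such setup under a Bradley-Terry-style assumption, where the odds matrix M_ij = P_ij / P_ji equals w_i / w_j and is therefore rank one; the transform, the initialization and the fixed iteration count are assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_games, frac_observed = 30, 200, 0.3
w = np.exp(rng.normal(size=N))                 # latent preference intensities

# Simulate partial, noisy pairwise comparisons and form the odds matrix M_ij = P_ij / P_ji.
M = np.zeros((N, N)); mask = np.zeros((N, N), dtype=bool)
for i in range(N):
    for j in range(i + 1, N):
        if rng.random() < frac_observed:
            wins_i = rng.binomial(n_games, w[i] / (w[i] + w[j]))
            p_ij = np.clip(wins_i / n_games, 0.01, 0.99)
            M[i, j], M[j, i] = p_ij / (1 - p_ij), (1 - p_ij) / p_ij
            mask[i, j] = mask[j, i] = True

# Alternating least squares for a rank-one fit u v^T on the observed entries only.
u, v = np.ones(N), np.ones(N)
for _ in range(50):
    for i in range(N):
        obs = mask[i]
        if obs.any():
            u[i] = M[i, obs] @ v[obs] / (v[obs] @ v[obs])
    for j in range(N):
        obs = mask[:, j]
        if obs.any():
            v[j] = M[obs, j] @ u[obs] / (u[obs] @ u[obs])

# In the noiseless model M_ij = w_i / w_j, so u is proportional to w: rank objects by u.
print("top-5 true:", np.argsort(-w)[:5], "top-5 estimated:", np.argsort(-u)[:5])
```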
Title: Distributions of a particle's position and their asymptotics in the $q$-deformed totally asymmetric zero range process with site dependent jumping rates, Abstract: In this paper we study the probability distribution of the position of a tagged particle in the $q$-deformed Totally Asymmetric Zero Range Process ($q$-TAZRP) with site dependent jumping rates. For a finite particle system, it is derived from the transition probability previously obtained by Wang and Waugh. We also provide the probability distribution formula for a tagged particle in the $q$-TAZRP with the so-called step initial condition in which infinitely many particles occupy one single site and all other sites are unoccupied. For the $q$-TAZRP with step initial condition, we provide a Fredholm determinant representation for the probability distribution function of the position of a tagged particle, and moreover we obtain the limiting distribution function as the time goes to infinity. Our asymptotic result for $q$-TAZRP with step initial condition is comparable to the limiting distribution function obtained by Tracy and Widom for the $k$-th leftmost particle in the asymmetric simple exclusion process with step initial condition (Theorem 2 in Commun. Math. Phys. 290, 129--154 (2009)).
[ 0, 1, 1, 0, 0, 0 ]
Title: Electrically controllable spin filtering based on superconducting helical states, Abstract: The magnetoelectric effects in the surface states of a three-dimensional topological insulator (3D TI) are extremely strong due to the full spin-momentum locking. Here, a microscopic theory of superconductor/3D TI (S/3D TI) bilayer structures in terms of quasiclassical Green's functions is developed. On the basis of the developed formalism, it is shown that the density of states (DOS) in the S/3D TI bilayer manifests giant magnetoelectric behavior and, as a result, S/3D TI heterostructures can work as non-magnetic, fully electrically controllable spin filters. It is shown that, due to the full spin-momentum locking, the amplitudes of the odd-frequency singlet and triplet components of the condensate wave function are equal. The same is valid for the even-frequency singlet and triplet components. We unveil the connection between the odd-frequency pairing in S/3D TI heterostructures and the magnetoelectric effects in the DOS.
[ 0, 1, 0, 0, 0, 0 ]
Title: An Online Development Environment for Answer Set Programming, Abstract: Recent progress in logic programming (e.g., the development of the Answer Set Programming paradigm) has made it possible to teach it to general undergraduate and even high school students. Given the limited exposure of these students to computer science, the complexity of downloading, installing and using tools for writing logic programs could be a major barrier for logic programming to reach a much wider audience. We developed an online answer set programming environment with a self contained file system and a simple interface, allowing users to write logic programs and perform several tasks over the programs.
[ 1, 0, 0, 0, 0, 0 ]
Title: Experience-based Optimization: A Coevolutionary Approach, Abstract: This paper studies improving solvers based on their past solving experiences, and focuses on improving solvers by offline training. Specifically, the key issues of offline training methods are discussed, and research belonging to this category but from different areas is reviewed in a unified framework. Existing training methods generally adopt a two-stage strategy in which selecting the training instances and training the solver on them are treated as two independent phases. This paper proposes a new training method, dubbed LiangYi, which addresses these two issues simultaneously. LiangYi includes a training module for a population-based solver and an instance sampling module for updating the training instances. The idea behind LiangYi is to promote the population-based solver by training it (with the training module) to improve its performance on those instances (discovered by the sampling module) on which it performs badly, while keeping the good performances obtained by it on previous instances. An instantiation of LiangYi on the Travelling Salesman Problem is also proposed. Empirical results on a huge testing set containing 10000 instances showed that LiangYi could train solvers that perform significantly better than solvers trained by other state-of-the-art training methods. Moreover, an empirical investigation of the behaviours of LiangYi confirmed that it was able to continuously improve the solver through training.
[ 1, 0, 0, 0, 0, 0 ]
Title: Leveraging the Crowd to Detect and Reduce the Spread of Fake News and Misinformation, Abstract: Online social networking sites are experimenting with the following crowd-powered procedure to reduce the spread of fake news and misinformation: whenever a user is exposed to a story through her feed, she can flag the story as misinformation and, if the story receives enough flags, it is sent to a trusted third party for fact checking. If this party identifies the story as misinformation, it is marked as disputed. However, given the uncertain number of exposures, the high cost of fact checking, and the trade-off between flags and exposures, the above mentioned procedure requires careful reasoning and smart algorithms which, to the best of our knowledge, do not exist to date. In this paper, we first introduce a flexible representation of the above procedure using the framework of marked temporal point processes. Then, we develop a scalable online algorithm, Curb, to select which stories to send for fact checking and when to do so to efficiently reduce the spread of misinformation with provable guarantees. In doing so, we need to solve a novel stochastic optimal control problem for stochastic differential equations with jumps, which is of independent interest. Experiments on two real-world datasets gathered from Twitter and Weibo show that our algorithm may be able to effectively reduce the spread of fake news and misinformation.
[ 1, 0, 0, 1, 0, 0 ]
Title: Data-efficient Auto-tuning with Bayesian Optimization: An Industrial Control Study, Abstract: Bayesian optimization is proposed for automatic learning of optimal controller parameters from experimental data. A probabilistic description (a Gaussian process) is used to model the unknown function from controller parameters to a user-defined cost. The probabilistic model is updated with data, which is obtained by testing a set of parameters on the physical system and evaluating the cost. In order to learn fast, the Bayesian optimization algorithm selects the next parameters to evaluate in a systematic way, for example, by maximizing information gain about the optimum. The algorithm thus iteratively finds the globally optimal parameters with only few experiments. Taking throttle valve control as a representative industrial control example, the proposed auto-tuning method is shown to outperform manual calibration: it consistently achieves better performance with a low number of experiments. The proposed auto-tuning framework is flexible and can handle different control structures and objectives.
[ 1, 0, 0, 0, 0, 0 ]
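The abstract above iteratively selects controller parameters by maximizing an information-driven acquisition criterion under a Gaussian process model of the cost. The sketch below is a generic GP-based loop of that kind, using an expected-improvement acquisition on a made-up one-dimensional "controller gain vs. cost" function; it is not the paper's experimental setup, and the kernel, candidate grid and budget are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def experiment_cost(gain):
    """Stand-in for running one experiment on the physical system (hypothetical cost)."""
    return (gain - 0.3) ** 2 + 0.05 * rng.normal()

def expected_improvement(mu, sigma, best):
    sigma = np.maximum(sigma, 1e-9)          # avoid division by zero
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# A few initial experiments, then a short Bayesian optimization loop.
X = rng.uniform(0, 1, size=(3, 1))
y = np.array([experiment_cost(x[0]) for x in X])

for _ in range(15):                          # few experiments, as in data-efficient tuning
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    cand = np.linspace(0, 1, 500).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, experiment_cost(x_next[0]))

print("best gain:", X[np.argmin(y)][0], "cost:", y.min())
```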
Title: The Global Optimization Geometry of Low-Rank Matrix Optimization, Abstract: This paper considers general rank-constrained optimization problems that minimize a general objective function $f(X)$ over the set of rectangular $n\times m$ matrices that have rank at most $r$. To tackle the rank constraint and also to reduce the computational burden, we factorize $X$ into $UV^T$ where $U$ and $V$ are $n\times r$ and $m\times r$ matrices, respectively, and then optimize over the small matrices $U$ and $V$. We characterize the global optimization geometry of the nonconvex factored problem and show that the corresponding objective function satisfies the robust strict saddle property as long as the original objective function $f$ satisfies restricted strong convexity and smoothness properties, ensuring global convergence of many local search algorithms (such as noisy gradient descent) in polynomial time for solving the factored problem. We also provide a comprehensive analysis for the optimization geometry of a matrix factorization problem where we aim to find $n\times r$ and $m\times r$ matrices $U$ and $V$ such that $UV^T$ approximates a given matrix $X^\star$. Aside from the robust strict saddle property, we show that the objective function of the matrix factorization problem has no spurious local minima and obeys the strict saddle property not only for the exact-parameterization case where $rank(X^\star) = r$, but also for the over-parameterization case where $rank(X^\star) < r$ and the under-parameterization case where $rank(X^\star) > r$. These geometric properties imply that a number of iterative optimization algorithms (such as gradient descent) converge to a global solution with random initialization.
[ 1, 0, 1, 0, 0, 0 ]
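The abstract above optimizes over the factors U, V of X = U V^T instead of over X itself. A minimal sketch of that factored approach on the plain matrix-approximation objective ||U V^T - X*||_F^2 is shown below; the step size, initialization scale and iteration count are arbitrary choices for illustration, not tuned values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 60, 40, 3
# A rank-r target with singular values of order one.
X_star = (rng.normal(size=(n, r)) / np.sqrt(n)) @ (rng.normal(size=(r, m)) / np.sqrt(m))

# Gradient descent on the factored objective f(U, V) = 0.5 * ||U V^T - X_star||_F^2.
U = 0.1 * rng.normal(size=(n, r))
V = 0.1 * rng.normal(size=(m, r))
step = 0.1
for _ in range(5000):
    R = U @ V.T - X_star                                   # residual
    U, V = U - step * (R @ V), V - step * (R.T @ U)        # gradients w.r.t. U and V

print("relative error:", np.linalg.norm(U @ V.T - X_star) / np.linalg.norm(X_star))
```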
Title: Which bridge estimator is optimal for variable selection?, Abstract: We study the problem of variable selection for linear models under the high-dimensional asymptotic setting, where the number of observations $n$ grows at the same rate as the number of predictors $p$. We consider two-stage variable selection techniques (TVS) in which the first stage uses bridge estimators to obtain an estimate of the regression coefficients, and the second stage simply thresholds this estimate to select the "important" predictors. The asymptotic false discovery proportion (AFDP) and true positive proportion (ATPP) of these TVS are evaluated. We prove that for a fixed ATPP, in order to obtain a smaller AFDP, one should pick a bridge estimator with smaller asymptotic mean square error in the first stage of TVS. Based on this principle, we present a sharp comparison of different TVS, via an in-depth investigation of the estimation properties of bridge estimators. Rather than "order-wise" error bounds with loose constants, our analysis focuses on precise error characterization. Various interesting signal-to-noise ratio and sparsity settings are studied. Our results offer new and thorough insights into high-dimensional variable selection. For instance, we prove that a TVS with Ridge in its first stage outperforms TVS with other bridge estimators in large noise settings; two-stage LASSO becomes inferior when the signal is rare and weak. As a by-product, we show our proposed two-stage methods outperform some standard variable selection techniques, such as LASSO and Sure Independence Screening, under certain conditions.
[ 0, 0, 1, 1, 0, 0 ]
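The abstract above analyzes two-stage variable selection: a bridge-type first-stage estimate followed by thresholding. The sketch below mimics that pipeline on synthetic data with LASSO and Ridge as first stages, reporting the empirical false discovery proportion and true positive proportion; the regularization strengths and the threshold are arbitrary illustrative choices, not the paper's tuning.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
n, p, s = 500, 250, 20                      # n proportional to p, with s true signals
beta = np.zeros(p); beta[:s] = 1.0
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)

def two_stage_select(estimator, threshold):
    """Stage 1: a bridge-type estimate of beta. Stage 2: keep coordinates above a threshold."""
    b_hat = estimator.fit(X, y).coef_
    selected = np.abs(b_hat) > threshold
    true_pos = int(np.sum(selected[:s]))
    fdp = (selected.sum() - true_pos) / max(selected.sum(), 1)   # false discovery proportion
    tpp = true_pos / s                                           # true positive proportion
    return round(float(fdp), 3), round(float(tpp), 3)

print("LASSO + thresholding (FDP, TPP):", two_stage_select(Lasso(alpha=0.1), threshold=0.5))
print("Ridge + thresholding (FDP, TPP):", two_stage_select(Ridge(alpha=10.0), threshold=0.5))
```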
Title: Network Systems and String Stability, Abstract: Network systems and their control are highly important and appear in a variety of applications, including vehicle platooning and formation control. Vehicle platoons in particular are heavily investigated, and an interesting problem that arises in this area is string stability, which, broadly speaking, concerns whether an input signal is amplified without bound as it travels through the vehicle string. However, various definitions are commonly used. In this paper, we aim to formalise the notion of string stability and illustrate the importance of these distinctions on simulation examples. A second goal is to generalise these definitions to general network systems.
[ 1, 0, 0, 0, 0, 0 ]
Title: Exploring the Function Space of Deep-Learning Machines, Abstract: The function space of deep-learning machines is investigated by studying growth in the entropy of functions of a given error with respect to a reference function, realized by a deep-learning machine. Using physics-inspired methods we study both sparsely and densely-connected architectures to discover a layer-wise convergence of candidate functions, marked by a corresponding reduction in entropy when approaching the reference function, gain insight into the importance of having a large number of layers, and observe phase transitions as the error increases.
[ 1, 1, 0, 0, 0, 0 ]
Title: On Geometry of Manifolds with Some Tensor Structures and Metrics of Norden Type, Abstract: The object of study in the present dissertation are some topics in differential geometry of smooth manifolds with additional tensor structures and metrics of Norden type. There are considered four cases depending on the dimension of the manifold: 2n, 2n + 1, 4n and 4n + 3. The studied tensor structures, which are counterparts in the different related dimensions, are the almost complex/contact/hypercomplex structure and the almost contact 3-structure. The considered metric on the 2n-dimensional case is the Norden metric, and the metrics in the other three cases are generated by it. The purpose of the dissertation is to carry out the following: 1. Further investigations of almost complex manifolds with Norden metric including studying of natural connections with conditions for their torsion and invariant tensors under the twin interchange of Norden metrics. 2. Further investigations of almost contact manifolds with B-metric including studying of natural connections with conditions for their torsion and associated Schouten-van Kampen connections as well as a classification of affine connections. 3. Introducing and studying of Sasaki-like almost contact complex Riemannian manifolds. 4. Further investigations of almost hypercomplex manifolds with Hermitian-Norden metrics including studying of integrable structures of the considered type on 4-dimensional Lie algebra and tangent bundles with the complete lift of the base metric; introducing of associated Nijenhuis tensors in relation with natural connections having totally skew-symmetric torsion as well as quaternionic Kähler manifolds with Hermitian-Norden metrics. 5. Introducing and studying of manifolds with almost contact 3-structures and metrics of Hermitian-Norden type and, in particular, associated Nijenhuis tensors and their relationship with natural connections having totally skew-symmetric torsion.
[ 0, 0, 1, 0, 0, 0 ]
Title: Backward-emitted sub-Doppler fluorescence from an optically thick atomic vapor, Abstract: The literature mentions only incidentally a sub-Doppler contribution in the excitation spectrum of the backward fluorescence of a dense vapor. This contribution is investigated here on Cs vapor, both on the first resonance line (894 nm) and on the weaker second resonance line (459 nm). We show that in a strongly absorbing medium, the quenching of excited atoms moving towards a window irradiated under near normal incidence reduces the fluorescence on the red side of the excitation spectrum. Atoms moving slowly towards the window produce a sub-Doppler velocity-selective contribution, whose visibility is improved here by applying a frequency-modulation technique. This sub-Doppler feature, induced by surface quenching combined with a short absorption length for the incident irradiation, exhibits close analogies with the narrow spectra appearing with thin vapor cells. We also show that a normal incidence irradiation is essential for the sub-Doppler feature to be observed, while it should be independent of the detection geometry.
[ 0, 1, 0, 0, 0, 0 ]
Title: Dust Density Distribution and Imaging Analysis of Different Ice Lines in Protoplanetary Disks, Abstract: Recent high angular resolution observations of protoplanetary disks at different wavelengths have revealed several kinds of structures, including multiple bright and dark rings. Embedded planets are the most used explanation for such structures, but there are alternative models capable of shaping the dust in rings as it has been observed. We assume a disk around a Herbig star and investigate the effect that ice lines have on the dust evolution, following the growth, fragmentation, and dynamics of multiple dust size particles, covering from 1 $\mu$m to 2 m sized objects. We use simplified prescriptions of the fragmentation velocity threshold, which is assumed to change radially at the location of one, two, or three ice lines. We assume changes at the radial location of main volatiles, specifically H$_2$O, CO$_2$, and NH$_3$. Radiative transfer calculations are done using the resulting dust density distributions in order to compare with current multiwavelength observations. We find that the structures in the dust density profiles and radial intensities at different wavelengths strongly depend on the disk viscosity. A clear gap of emission can be formed between ice lines and be surrounded by ring-like structures, in particular between the H$_2$O and CO$_2$ (or CO). The gaps are expected to be shallower and narrower at millimeter emission than at near-infrared, opposite to model predictions of particle trapping. In our models, the total gas surface density is not expected to show strong variations, in contrast to other gap-forming scenarios such as embedded giant planets or radial variations of the disk viscosity.
[ 0, 1, 0, 0, 0, 0 ]
Title: nIFTy Cosmology: the clustering consistency of galaxy formation models, Abstract: We present a clustering comparison of 12 galaxy formation models (including Semi-Analytic Models (SAMs) and Halo Occupation Distribution (HOD) models) all run on halo catalogues and merger trees extracted from a single {\Lambda}CDM N-body simulation. We compare the results of the measurements of the mean halo occupation numbers, the radial distribution of galaxies in haloes and the 2-Point Correlation Functions (2PCF). We also study the implications of the different treatments of orphan (galaxies not assigned to any dark matter subhalo) and non-orphan galaxies in these measurements. Our main result is that the galaxy formation models generally agree in their clustering predictions but they disagree significantly between HOD and SAMs for the orphan satellites. Although there is a very good agreement between the models on the 2PCF of central galaxies, the scatter between the models when orphan satellites are included can be larger than a factor of 2 for scales smaller than 1 Mpc/h. We also show that galaxy formation models that do not include orphan satellite galaxies have a significantly lower 2PCF on small scales, consistent with previous studies. Finally, we show that the 2PCF of orphan satellites is remarkably different between SAMs and HOD models. Orphan satellites in SAMs present a higher clustering than in HOD models because they tend to occupy more massive haloes. We conclude that orphan satellites have an important role on galaxy clustering and they are the main cause of the differences in the clustering between HOD models and SAMs.
[ 0, 1, 0, 0, 0, 0 ]
Title: The near-critical Gibbs measure of the branching random walk, Abstract: Consider the supercritical branching random walk on the real line in the boundary case and the associated Gibbs measure $\nu_{n,\beta}$ on the $n^\text{th}$ generation, which is also the polymer measure on a disordered tree with inverse temperature $\beta$. The convergence of the partition function $W_{n,\beta}$, after rescaling, towards a nontrivial limit has been proved by Aïdékon and Shi in the critical case $\beta = 1$ and by Madaule when $\beta >1$. We study here the near-critical case, where $\beta_n \to 1$, and prove the convergence of $W_{n,\beta_n}$, after rescaling, towards a constant multiple of the limit of the derivative martingale. Moreover, trajectories of particles chosen according to the Gibbs measure $\nu_{n,\beta}$ have been studied by Madaule in the critical case, with convergence towards the Brownian meander, and by Chen, Madaule and Mallein in the strong disorder regime, with convergence towards the normalized Brownian excursion. We prove here the convergence for trajectories of particles chosen according to the near-critical Gibbs measure and display continuous families of processes from the meander to the excursion or to the Brownian motion.
[ 0, 0, 1, 0, 0, 0 ]
Title: Graded Lie algebras and regular prehomogeneous vector spaces with one-dimensional scalar multiplication, Abstract: The aim of this paper is to study relations between regular reductive PVs with one-dimensional scalar multiplication and the structure of graded Lie algebras. We will show that the regularity of such PVs is described by an $\mathfrak{sl}_2$-triplet of a graded Lie algebra.
[ 0, 0, 1, 0, 0, 0 ]
Title: EPIC 210894022b - A short period super-Earth transiting a metal poor, evolved old star, Abstract: The star EPIC 210894022 has been identified from a light curve acquired through the K2 space mission as possibly orbited by a transiting planet. Our aim is to confirm the planetary nature of the object and derive its fundamental parameters. We combine the K2 photometry with reconnaissance spectroscopy and radial velocity (RV) measurements obtained using three separate telescope and spectrograph combinations. The spectroscopic synthesis package SME has been used to derive the stellar photospheric parameters that were used as input to various stellar evolutionary tracks in order to derive the parameters of the system. The planetary transit was also validated to occur on the assumed host star through adaptive imaging and statistical analysis. The star is found to be located in the background of the Hyades cluster at a distance at least 4 times further away from Earth than the cluster itself. The spectrum and the space velocities of EPIC 210894022 strongly suggest it to be a member of the thick disk population. We find that the star is a metal-poor ([Fe/H]=-0.53+/-0.05 dex), alpha-rich, somewhat evolved solar-like object of spectral type G3 with Teff=5730+/-50 K, logg=4.15+/-0.1 (cgs), a radius of 1.3+/-0.1 R_Sun, and a mass of 0.88+/-0.02 M_Sun. The RV detection together with the imaging confirms with a high level of significance that the transit signature is caused by a super-Earth orbiting the star EPIC 210894022. We measure a mass of 8.6+/-3.9 M_Earth and a radius of 1.9+/-0.2 R_Earth. A second more massive object with a period longer than about 120 days is indicated by a long-term linear acceleration. With an age of >10 Gyr, this system is one of the oldest in which planets have hitherto been detected. Further studies of this planetary system are important since it contains information about the planetary formation process during a very early epoch of the history of our Galaxy.
[ 0, 1, 0, 0, 0, 0 ]
Title: Efficient computation of pi by the Newton-Raphson iteration and a two-term Machin-like formula, Abstract: In our recent publication we have proposed a new methodology for determination of the two-term Machin-like formula for pi with small arguments of the arctangent function of kind $$ \frac{\pi }{4} = {2^{k - 1}}\arctan \left( {\frac{1}{\beta_1}} \right) + \arctan \left( {\frac{1}{\beta_2}} \right), $$ where $k$ and ${\beta_1}$ are some integers and ${\beta_2}$ is a rational number, dependent upon ${\beta_1}$ and $k$. Although ${1/\left|\beta_2\right|}$ may be significantly smaller than ${1/\beta_1}$, the large numbers in the numerator and denominator of $\beta_2$ decelerate the computation. In this work we show how this problem can be effectively resolved by the Newton-Raphson iteration method.
[ 0, 0, 1, 0, 0, 0 ]
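For readers who want to experiment, the sketch below evaluates the classical two-term Machin formula pi/4 = 4 arctan(1/5) - arctan(1/239) in fixed-point integer arithmetic. The paper's own $k$, $\beta_1$, $\beta_2$ and its Newton-Raphson acceleration of the rational-argument term are not reproduced here; this only illustrates how a two-term arctangent identity yields pi.

```python
def arctan_inv(x, scale):
    """arctan(1/x) * scale using integer arithmetic (Taylor series)."""
    power = scale // x          # scale / x^(2k+1), starting at k = 0
    total = power
    x2, n, sign = x * x, 1, 1
    while power:
        power //= x2
        n += 2
        sign = -sign
        total += sign * (power // n)
    return total

def machin_pi(digits):
    """pi * 10**digits via pi/4 = 4*arctan(1/5) - arctan(1/239)."""
    scale = 10 ** (digits + 10)                       # 10 guard digits
    pi_scaled = 4 * (4 * arctan_inv(5, scale) - arctan_inv(239, scale))
    return pi_scaled // 10 ** 10                      # drop the guard digits

print(machin_pi(30))   # 3141592653589793238462643383279 (3 followed by 30 decimals)
```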
Title: Quantum Simulation and Spectroscopy of Entanglement Hamiltonians, Abstract: Entanglement is central to our understanding of many-body quantum matter. In particular, the entanglement spectrum, as eigenvalues of the reduced density matrix of a subsystem, provides a unique footprint of properties of strongly correlated quantum matter from detection of topological order to characterisation of quantum critical systems. However, direct experimental measurement of the entanglement spectrum has so far remained elusive due to lack of direct experimental probes. Here we show that the entanglement spectrum of the ground state of a broad class of Hamiltonians becomes directly accessible as quantum simulation and spectroscopy of an entanglement Hamiltonian, building on the Bisognano-Wichmann (BW) theorem of axiomatic quantum field theory. Remarkably, this theorem gives an explicit physical construction of the entanglement Hamiltonian, identified as Hamiltonian of the many-body system of interest with spatially varying couplings. Building on this, we propose an immediate, scalable recipe for implementation of the entanglement Hamiltonian, and measurement of the corresponding entanglement spectrum as spectroscopy of the Bisognano-Wichmann Hamiltonian with synthetic quantum systems, including atoms in optical lattices and trapped ions. We illustrate and benchmark this scenario on a variety of models, spanning phenomena as diverse as conformal field theories, topological order, and quantum phase transitions.
[ 0, 1, 0, 0, 0, 0 ]
Title: Persistence-like distance on Tamarkin's category and symplectic displacement energy, Abstract: We introduce a persistence-like pseudo-distance on Tamarkin's category and prove that the distance between an object and its Hamiltonian deformation is at most the Hofer norm of the Hamiltonian function. Using the distance, we show a quantitative version of Tamarkin's non-displaceability theorem, which gives a lower bound of the displacement energy of compact subsets in a cotangent bundle.
[ 0, 0, 1, 0, 0, 0 ]
Title: On $q$-commutative power and Laurent series rings at roots of unity, Abstract: We continue the first and second authors' study of $q$-commutative power series rings $R=k_q[[x_1,\ldots,x_n]]$ and Laurent series rings $L=k_q[[x^{\pm 1}_1,\ldots,x^{\pm 1}_n]]$, specializing to the case in which the commutation parameters $q_{ij}$ are all roots of unity. In this setting, $R$ is a PI algebra, and we can apply results of De Concini, Kac, and Procesi to show that $L$ is an Azumaya algebra whose degree can be inferred from the $q_{ij}$. Our main result establishes an exact criterion (dependent on the $q_{ij}$) for determining when the centers of $L$ and $R$ are commutative Laurent series and commutative power series rings, respectively. In the event this criterion is satisfied, it follows that $L$ is a unique factorization ring in the sense of Chatters and Jordan, and it further follows, by results of Dumas, Launois, Lenagan, and Rigal, that $R$ is a unique factorization ring. We thus produce new examples of complete, local, noetherian, noncommutative, unique factorization rings (that are PI domains).
[ 0, 0, 1, 0, 0, 0 ]
Title: On risk-sensitive piecewise deterministic Markov decision processes, Abstract: We consider a piecewise deterministic Markov decision process, where the expected exponential utility of total (nonnegative) cost is to be minimized. The cost rate, transition rate and post-jump distributions are under control. The state space is Borel, and the transition and cost rates are locally integrable along the drift. Under natural conditions, we establish the optimality equation, justify the value iteration algorithm, and show the existence of a deterministic stationary optimal policy. Applied to special cases, the obtained results already significantly improve some existing results in the literature on finite horizon and infinite horizon discounted risk-sensitive continuous-time Markov decision processes.
[ 0, 0, 1, 0, 0, 0 ]
Title: Spectral Clustering Methods for Multiplex Networks, Abstract: Multiplex networks offer an important tool for the study of complex systems and extending techniques originally designed for single-layer networks is an important area of study. One of the most important methods for analyzing networks is clustering the nodes into communities that represent common connectivity patterns. In this paper we extend spectral clustering to multiplex structures and discuss some of the difficulties that arise in attempting to define a natural generalization. In order to analyze our approach, we describe three simple, synthetic multiplex networks and compare the performance of different multiplex models. Our results suggest that a dynamically motivated model is more successful than a structurally motivated model in discovering the appropriate communities.
[ 1, 1, 0, 0, 0, 0 ]
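One simple way to extend spectral clustering to a multiplex network, as discussed in the abstract above, is to aggregate the layer adjacency matrices and cluster the resulting single-layer graph. The Python sketch below implements this aggregate baseline; it is an assumption of this example, not the paper's dynamically motivated model.

```python
import numpy as np
from sklearn.cluster import KMeans

def multiplex_spectral_clustering(layers, k):
    """Spectral clustering of a multiplex network after aggregating its
    layer adjacency matrices (one simple baseline generalisation)."""
    A = sum(np.asarray(layer, dtype=float) for layer in layers)
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    L_sym = np.eye(len(d)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(L_sym)            # eigenvalues in ascending order
    U = eigvecs[:, :k]                            # k smallest eigenvectors
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

# Two triangles {0,1,2} and {3,4,5}; the bridge 2-3 exists only in layer 2.
A1 = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A1[u, v] = A1[v, u] = 1
A2 = A1.copy()
A2[2, 3] = A2[3, 2] = 1
print(multiplex_spectral_clustering([A1, A2], k=2))   # e.g. [0 0 0 1 1 1]
```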
Title: Logical properties of random graphs from small addable classes, Abstract: We establish zero-one laws and convergence laws for monadic second-order logic (MSO) (and, a fortiori, first-order logic) on a number of interesting graph classes. In particular, we show that MSO obeys a zero-one law on the class of connected planar graphs, the class of connected graphs of tree-width at most k and the class of connected graphs excluding the k-clique as a minor. In each of these cases, dropping the connectivity requirement leads to a class where the zero-one law fails but a convergence law for MSO still holds.
[ 1, 0, 1, 0, 0, 0 ]
Title: Unidirectional zero reflection as gauged parity-time symmetry, Abstract: We introduce here the concept of establishing Parity-time symmetry through a gauge transformation involving a shift of the mirror plane for the Parity operation. The corresponding unitary transformation on the system's constitutive matrix allows us to generate and explore a family of equivalent Parity-time symmetric systems. We further derive that unidirectional zero reflection can always be associated with a gauged PT-symmetry and demonstrate this experimentally using a microstrip transmission-line with magnetoelectric coupling. This study allows us to use bianisotropy as a simple route to realize and explore exceptional point behaviour of PT-symmetric or generally non-Hermitian systems.
[ 0, 1, 0, 0, 0, 0 ]
Title: Gaussian curvature directs the distribution of spontaneous curvature on bilayer membrane necks, Abstract: Formation of membrane necks is crucial for fission and fusion in lipid bilayers. In this work, we seek to answer the following fundamental question: what is the relationship between protein-induced spontaneous mean curvature and the Gaussian curvature at a membrane neck? Using an augmented Helfrich model for lipid bilayers to include membrane-protein interaction, we solve the shape equation on catenoids to find the field of spontaneous curvature that satisfies mechanical equilibrium of membrane necks. In this case, the shape equation reduces to a variable coefficient Helmholtz equation for spontaneous curvature, where the source term is proportional to the Gaussian curvature. We show how this latter quantity is responsible for non-uniform distribution of spontaneous curvature in minimal surfaces. We then explore the energetics of catenoids with different spontaneous curvature boundary conditions and geometric asymmetries to show how heterogeneities in spontaneous curvature distribution can couple with Gaussian curvature to result in membrane necks of different geometries.
[ 0, 1, 0, 0, 0, 0 ]
Title: Divergence, Entropy, Information: An Opinionated Introduction to Information Theory, Abstract: Information theory is a mathematical theory of learning with deep connections with topics as diverse as artificial intelligence, statistical physics, and biological evolution. Many primers on the topic paint a broad picture with relatively little mathematical sophistication, while many others develop specific application areas in detail. In contrast, these informal notes aim to outline some elements of the information-theoretic "way of thinking," by cutting a rapid and interesting path through some of the theory's foundational concepts and theorems. We take the Kullback-Leibler divergence as our foundational concept, and then proceed to develop the entropy and mutual information. We discuss some of the main foundational results, including the Chernoff bounds as a characterization of the divergence; Gibbs' Theorem; and the Data Processing Inequality. A recurring theme is that the definitions of information theory support natural theorems that sound "obvious" when translated into English. More pithily, "information theory makes common sense precise." Since the focus of the notes is not primarily on technical details, proofs are provided only where the relevant techniques are illustrative of broader themes. Otherwise, proofs and intriguing tangents are referenced in liberally-sprinkled footnotes. The notes close with a highly nonexhaustive list of references to resources and other perspectives on the field.
[ 0, 0, 1, 1, 0, 0 ]
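The foundational quantities mentioned above are easy to compute for discrete distributions. The sketch below implements the Kullback-Leibler divergence and obtains the mutual information as I(X;Y) = D(p(x,y) || p(x)p(y)); the binary symmetric channel used as a demo is an illustration, not an example taken from the notes.

```python
import numpy as np

def kl_divergence(p, q):
    """D(p || q) in bits, for discrete distributions on the same support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                      # 0 * log(0/q) = 0 by convention
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def mutual_information(pxy):
    """I(X;Y) = D( p(x,y) || p(x)p(y) ) for a joint probability table."""
    pxy = np.asarray(pxy, dtype=float)
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    return kl_divergence(pxy.ravel(), (px * py).ravel())

# X is a uniform bit, Y is X flipped with probability 0.1 (binary symmetric channel).
joint = np.array([[0.45, 0.05],
                  [0.05, 0.45]])
print(mutual_information(joint))      # about 0.531 bits, i.e. 1 - H(0.1)
```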
Title: Stochastic Constraint Programming as Reinforcement Learning, Abstract: Stochastic Constraint Programming (SCP) is an extension of Constraint Programming (CP) used for modelling and solving problems involving constraints and uncertainty. SCP inherits excellent modelling abilities and filtering algorithms from CP, but so far it has not been applied to large problems. Reinforcement Learning (RL) extends Dynamic Programming to large stochastic problems, but is problem-specific and has no generic solvers. We propose a hybrid combining the scalability of RL with the modelling and constraint filtering methods of CP. We implement a prototype in a CP system and demonstrate its usefulness on SCP problems.
[ 1, 0, 0, 0, 0, 0 ]
Title: Improving Sparsity in Kernel Adaptive Filters Using a Unit-Norm Dictionary, Abstract: Kernel adaptive filters, a class of adaptive nonlinear time-series models, are known for their ability to learn expressive autoregressive patterns from sequential data. However, for trivial monotonic signals, they struggle to produce accurate predictions while keeping computational complexity within desired bounds. This is because new observations are incorporated into the dictionary when they are far from what the algorithm has seen in the past. We propose a novel approach to kernel adaptive filtering that compares new observations against dictionary samples in terms of their unit-norm (normalised) versions, meaning that new observations that look like previous samples but have a different magnitude are not added to the dictionary. We achieve this by proposing the unit-norm Gaussian kernel and defining a sparsification criterion for this novel kernel. This new methodology is validated on two real-world datasets against standard kernel adaptive filters in terms of the normalised mean square error and the dictionary size.
[ 1, 0, 0, 1, 0, 0 ]
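A sketch of the kernel idea described above: evaluate a Gaussian kernel on unit-normalised inputs, so samples differing only in magnitude look identical, and use it in a coherence-style novelty test. The threshold-based test below is an illustrative assumption; the paper defines its own sparsification criterion.

```python
import numpy as np

def unit_norm_gaussian_kernel(x, y, sigma=1.0):
    """Gaussian kernel evaluated on the unit-norm versions of the inputs,
    so two samples that differ only in scale have kernel value 1."""
    xn = x / np.linalg.norm(x)
    yn = y / np.linalg.norm(y)
    return np.exp(-np.sum((xn - yn) ** 2) / (2.0 * sigma ** 2))

def novel_enough(x, dictionary, sigma=1.0, threshold=0.95):
    """Add x only if it is not too similar (in the unit-norm kernel sense)
    to every stored dictionary sample; threshold is an arbitrary choice."""
    return all(unit_norm_gaussian_kernel(x, d, sigma) < threshold
               for d in dictionary)

x = np.array([1.0, 2.0, 3.0])
print(unit_norm_gaussian_kernel(x, 10 * x))   # 1.0: same direction, different scale
```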
Title: An analyst's take on the BPHZ theorem, Abstract: We provide a self-contained formulation of the BPHZ theorem in the Euclidean context, which yields a systematic procedure to "renormalise" otherwise divergent integrals appearing in generalised convolutions of functions with a singularity of prescribed order at their origin. We hope that the formulation given in this article will appeal to an analytically minded audience and that it will help to clarify to what extent such renormalisations are arbitrary (or not). In particular, we do not assume any background whatsoever in quantum field theory and we stay away from any discussion of the physical context in which such problems typically arise.
[ 0, 0, 1, 0, 0, 0 ]
Title: A variational-geometric approach for the optimal control of nonholonomic systems, Abstract: Necessary conditions for the existence of normal extremals in optimal control of systems subject to nonholonomic constraints are derived as solutions of a constrained second-order variational problem. In this work, a geometric interpretation of the derivation is studied using the theory of Lie algebroids. We employ this framework to cast the problem into a unifying formalism for normal extremals in optimal control of nonholonomic systems, including situations that have not been considered before in the literature from this perspective. We show that necessary conditions for the existence of extremals in the optimal control problem can also be determined by a Hamiltonian system on the cotangent bundle of a skew-symmetric algebroid.
[ 0, 0, 1, 0, 0, 0 ]
Title: Development of a compact ExB microchannel plate detector for beam imaging, Abstract: A beam imaging detector was developed by coupling a multi-strip anode with delay line readout to an E$\times$B microchannel plate (MCP) detector. This detector is capable of measuring the incident position of the beam particles in one-dimension. To assess the spatial resolution, the detector was illuminated by an $\alpha$-source with an intervening mask that consists of a series of precisely-machined slits. The measured spatial resolution was 520$\mu$m FWHM, which was improved to 413$\mu$m FWHM by performing an FFT of the signals, rejecting spurious signals on the delay line, and requiring a minimum signal amplitude. This measured spatial resolution of 413$\mu$m FWHM corresponds to an intrinsic resolution of 334$\mu$m FWHM when the effect of the finite slit width is de-convoluted. To understand the measured resolution, the performance of the detector is simulated with the ion-trajectory code SIMION.
[ 0, 1, 0, 0, 0, 0 ]
Title: StackSeq2Seq: Dual Encoder Seq2Seq Recurrent Networks, Abstract: A widely studied non-deterministic polynomial time (NP) hard problem lies in finding a route between the two nodes of a graph. Often, heuristic search algorithms such as $A^{*}$ are employed on graphs with a large number of nodes. Here, we propose a deep recurrent neural network architecture based on the Sequence-2-Sequence (Seq2Seq) model, widely used, for instance, in text translation. In particular, we illustrate that utilising a context vector that has been learned from two different recurrent networks enables increased accuracy in learning the shortest route of a graph. Additionally, we show that one can boost the performance of the Seq2Seq network by smoothing the loss function using a homotopy continuation of the decoder's loss function.
[ 1, 0, 0, 1, 0, 0 ]
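A minimal PyTorch sketch of the dual-encoder idea: two independent recurrent encoders read the source sequence and their final hidden states are fused into one context vector for the decoder. The fusion rule (concatenate and project), the GRU cells, and all hyper-parameters are illustrative assumptions; the route-finding training setup of the paper is not reproduced.

```python
import torch
import torch.nn as nn

class DualEncoderSeq2Seq(nn.Module):
    """Two encoders read the same source; their final states form the context."""
    def __init__(self, vocab, emb=64, hid=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.enc_a = nn.GRU(emb, hid, batch_first=True)
        self.enc_b = nn.GRU(emb, hid, batch_first=True)
        self.fuse = nn.Linear(2 * hid, hid)       # fuse the two final states
        self.dec = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, src, tgt):
        e = self.embed(src)
        _, h_a = self.enc_a(e)                    # (1, batch, hid)
        _, h_b = self.enc_b(e)
        ctx = torch.tanh(self.fuse(torch.cat([h_a, h_b], dim=-1)))
        dec_out, _ = self.dec(self.embed(tgt), ctx)
        return self.out(dec_out)                  # (batch, tgt_len, vocab)

model = DualEncoderSeq2Seq(vocab=10)
logits = model(torch.randint(0, 10, (2, 7)), torch.randint(0, 10, (2, 5)))
print(logits.shape)                               # torch.Size([2, 5, 10])
```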
Title: Supervised Deep Hashing for Hierarchical Labeled Data, Abstract: Recently, hashing methods have been widely used in large-scale image retrieval. However, most existing hashing methods do not consider the hierarchical relation of labels, which means that they ignore the rich information stored in the hierarchy. Moreover, most previous works treat each bit in a hash code equally, which does not suit the scenario of hierarchical labeled data. In this paper, we propose a novel deep hashing method, called supervised hierarchical deep hashing (SHDH), to perform hash code learning for hierarchical labeled data. Specifically, we define a novel similarity formula for hierarchical labeled data by weighting each layer, and design a deep convolutional neural network to obtain a hash code for each data point. Extensive experiments on several real-world public datasets show that the proposed method outperforms the state-of-the-art baselines in the image retrieval task.
[ 1, 0, 0, 0, 0, 0 ]
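A toy illustration of a layer-weighted similarity for hierarchical labels, in the spirit of the formula mentioned above. The paper's exact definition is not given in the abstract, so the prefix-agreement rule and the weights used here are assumptions of this example.

```python
def hierarchical_similarity(path_a, path_b, weights):
    """Credit two items for each hierarchy level on which their labels agree,
    with per-level weights; agreement is required level by level from the root."""
    score = 0.0
    for la, lb, w in zip(path_a, path_b, weights):
        if la != lb:
            break
        score += w
    return score / sum(weights)

# Labels given as root-to-leaf paths in a toy taxonomy.
print(hierarchical_similarity(("animal", "dog", "terrier"),
                              ("animal", "dog", "poodle"),
                              weights=(0.5, 0.3, 0.2)))   # 0.8
```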
Title: The growth rates of automaton groups generated by reset automata, Abstract: We give sufficient conditions under which groups generated by automata in a class $\mathcal{C}$ of transducers, which contains the class of reset automata transducers, have infinite order. As a consequence we also demonstrate that if a group generated by an automaton in $\mathcal{C}$ is infinite, then it contains a free semigroup of rank at least 2. This gives a new proof, in the context of groups generated by automata in $\mathcal{C}$, of a result of Chou showing that finitely generated elementary amenable groups either have polynomial growth or contain a free semigroup of rank at least 2. We also study what we call the `core growth rate' of elements of $\mathcal{C}$. This turns out to be equivalent to the growth rate of certain initial transducers. We give examples of transducers with exponential core growth rate, and conjecture that all infinite order transducers in the class $\mathcal{C}$ have exponential core growth rate.
[ 0, 0, 1, 0, 0, 0 ]
Title: Combinatorial identities and Chern numbers of complex flag manifolds, Abstract: We present in this article a family of new combinatorial identities obtained via purely differential/complex geometric methods, which include as a special case a unified and explicit formula for the Chern numbers of all complex flag manifolds. Our strategy is to construct concrete circle actions with isolated fixed points on these manifolds and explicitly determine their weights. Then applying Bott's residue formula to these models yields the desired results.
[ 0, 0, 1, 0, 0, 0 ]
Title: Impact splash chondrule formation during planetesimal recycling, Abstract: Chondrules are the dominant bulk silicate constituent of chondritic meteorites and originate from highly energetic, local processes during the first million years after the birth of the Sun. So far, an astrophysically consistent chondrule formation scenario, explaining major chemical, isotopic and textural features, remains elusive. Here, we examine the prospect of forming chondrules from planetesimal collisions. We show that intensely melted bodies with interior magma oceans became rapidly chemically equilibrated and physically differentiated. Therefore, collisional interactions among such bodies would have resulted in chondrule-like but basaltic spherules, which are not observed in the meteoritic record. This inconsistency with the expected dynamical interactions hints at an incomplete understanding of the planetary growth regime during the protoplanetary disk phase. To resolve this conundrum, we examine how the observed chemical and isotopic features of chondrules constrain the dynamical environment of accreting chondrite parent bodies by interpreting the meteoritic record as an impact-generated proxy of planetesimals that underwent repeated collision and reaccretion cycles. Using a coupled evolution-collision model we demonstrate that the vast majority of collisional debris feeding the asteroid main belt must be derived from planetesimals which were partially molten at maximum. Therefore, the precursors of chondrite parent bodies either formed primarily small, from sub-canonical aluminum-26 reservoirs, or collisional destruction mechanisms were efficient enough to shatter planetesimals before they reached the magma ocean phase. Finally, we outline the window in parameter space for which chondrule formation from planetesimal collisions can be reconciled with the meteoritic record and how our results can be used to further constrain early solar system dynamics.
[ 0, 1, 0, 0, 0, 0 ]
Title: Z-checker: A Framework for Assessing Lossy Compression of Scientific Data, Abstract: Because of the vast volume of data being produced by today's scientific simulations and experiments, lossy data compressors allowing user-controlled loss of accuracy during compression are a relevant solution for significantly reducing the data size. However, lossy compressor developers and users are missing a tool to explore the features of scientific datasets and understand the data alteration after compression in a systematic and reliable way. To address this gap, we have designed and implemented a generic framework called Z-checker. On the one hand, Z-checker combines a battery of data analysis components for data compression. On the other hand, Z-checker is implemented as an open-source community tool to which users and developers can contribute and add new analysis components based on their additional analysis demands. In this paper, we present a survey of existing lossy compressors. Then we describe the design framework of Z-checker, in which we integrated evaluation metrics proposed in prior work as well as other analysis tools. Specifically, for lossy compressor developers, Z-checker can be used to characterize critical properties of any dataset to improve compression strategies. For lossy compression users, Z-checker can detect the compression quality, providing various global distortion analyses comparing the original data with the decompressed data, as well as statistical analysis of the compression error. Z-checker can perform the analysis with either coarse granularity or fine granularity, such that users and developers can select the best-fit, adaptive compressors for different parts of the dataset. Z-checker features a visualization interface displaying all analysis results in addition to some basic views of the datasets, such as time series. To the best of our knowledge, Z-checker is the first tool designed to assess lossy compression comprehensively for scientific datasets.
[ 1, 1, 0, 0, 0, 0 ]
Title: Combining the Transcorrelated method with Full Configuration Interaction Quantum Monte Carlo: application to the homogeneous electron gas, Abstract: We suggest an efficient method to resolve electronic cusps in electronic structure calculations by using an effective transcorrelated Hamiltonian. This effective Hamiltonian takes a simple form for plane wave bases, containing up to two-body operators only, and its use incurs almost no additional computational overhead compared to that of the original Hamiltonian. We apply this method in combination with the full configuration interaction quantum Monte Carlo (FCIQMC) method to the homogeneous electron gas. As a projection technique, the non-Hermitian nature of the transcorrelated Hamiltonian does not cause complications or numerical difficulties for FCIQMC. The rate of convergence of the total energy to the complete basis set limit is improved from ${\cal O}(M^{-1})$ to ${\cal O}\left({M^{-5/3}}\right)$, where $M$ is the total number of orbital basis functions.
[ 0, 1, 0, 0, 0, 0 ]
Title: Dynamics of quantum information in many-body localized systems, Abstract: We characterize the information dynamics of strongly disordered systems using a combination of analytics, exact diagonalization, and matrix product operator simulations. More specifically, we study the spreading of quantum information in three different scenarios: thermalizing, Anderson localized, and many-body localized. We qualitatively distinguish these cases by quantifying the amount of remnant information in a local region. The nature of the dynamics is further explored by computing the propagation of mutual information with respect to varying partitions. Finally, we demonstrate that classical simulability, as captured by the magnitude of MPO truncation errors, exhibits enhanced fluctuations near the localization transition, suggesting the possibility of its use as a diagnostic of the critical point.
[ 0, 1, 0, 0, 0, 0 ]
Title: Correlations in suspensions confined between viscoelastic surfaces: Noncontact microrheology, Abstract: We study theoretically the velocity cross-correlations of a viscous fluid confined in a slit between two viscoelastic media. We analyze the effect of these correlations on the motions of particles suspended in the fluid. The compliance of the confining boundaries gives rise to a long-ranged pair correlation, decaying only as $1/r$ with the interparticle distance $r$. We show how this long-ranged effect may be used to extract the viscoelastic properties of the confining media without embedding tracer particles in them. We discuss the remarkable robustness of such a potential technique with respect to details of the confinement, and its expected statistical advantages over standard two-point microrheology.
[ 0, 1, 0, 0, 0, 0 ]
Title: The Cosmic V-Web, Abstract: The network of filaments with embedded clusters surrounding voids seen in maps derived from redshift surveys and reproduced in simulations has been referred to as the cosmic web. A complementary description is provided by considering the shear in the velocity field of galaxies. The eigenvalues of the shear provide information on whether a region is collapsing in three dimensions, the condition for a knot, expanding in three dimensions, the condition for a void, or in the intermediate condition of a filament or sheet. The structures that are quantitatively defined by the eigenvalues can be approximated by iso-contours that provide a visual representation of the cosmic velocity (V) web. The current application is based on radial peculiar velocities from the Cosmicflows-2 collection of distances. The three-dimensional velocity field is constructed using the Wiener filter methodology in the linear approximation. Eigenvalues of the velocity shear are calculated at each point on a grid. Here, knots and filaments are visualized across a local domain of diameter ~0.1c.
[ 0, 1, 0, 0, 0, 0 ]
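A compact sketch of the classification step described above: build the symmetrised velocity shear tensor on a grid, take its eigenvalues, and count how many exceed a threshold (0 = void, 1 = sheet, 2 = filament, 3 = knot). The grid spacing, the zero threshold, and the omission of the Hubble normalisation are simplifying assumptions of this example.

```python
import numpy as np

def vweb_classification(vx, vy, vz, spacing=1.0, threshold=0.0):
    """Count positive eigenvalues of the (sign-flipped, symmetrised) shear
    of a gridded velocity field: 0 = void, 1 = sheet, 2 = filament, 3 = knot."""
    v = np.stack([vx, vy, vz], axis=-1)                        # (nx, ny, nz, 3)
    grads = np.stack([np.stack(np.gradient(v[..., i], spacing), axis=-1)
                      for i in range(3)], axis=-2)             # d v_i / d x_j
    shear = -0.5 * (grads + np.swapaxes(grads, -1, -2))        # symmetrise, flip sign
    eigvals = np.linalg.eigvalsh(shear)                        # per grid cell
    return (eigvals > threshold).sum(axis=-1)

# Pure radial inflow: every cell should classify as a knot (label 3).
grid = np.linspace(-1, 1, 16)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
print(np.unique(vweb_classification(-X, -Y, -Z)))              # [3]
```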
Title: Irreducibility and r-th root finding over finite fields, Abstract: Constructing $r$-th nonresidue over a finite field is a fundamental computational problem. A related problem is to construct an irreducible polynomial of degree $r^e$ (where $r$ is a prime) over a given finite field $\mathbb{F}_q$ of characteristic $p$ (equivalently, constructing the bigger field $\mathbb{F}_{q^{r^e}}$). Both these problems have famous randomized algorithms but the derandomization is an open question. We give some new connections between these two problems and their variants. In 1897, Stickelberger proved that if a polynomial has an odd number of even degree factors, then its discriminant is a quadratic nonresidue in the field. We give an extension of Stickelberger's Lemma; we construct $r$-th nonresidues from a polynomial $f$ for which there is a $d$, such that, $r|d$ and $r\nmid\,$#(irreducible factor of $f(x)$ of degree $d$). Our theorem has the following interesting consequences: (1) we can construct $\mathbb{F}_{q^m}$ in deterministic poly(deg($f$),$m\log q$)-time if $m$ is an $r$-power and $f$ is known; (2) we can find $r$-th roots in $\mathbb{F}_{p^m}$ in deterministic poly($m\log p$)-time if $r$ is constant and $r|\gcd(m,p-1)$. We also discuss a conjecture significantly weaker than the Generalized Riemann hypothesis to get a deterministic poly-time algorithm for $r$-th root finding.
[ 1, 0, 1, 0, 0, 0 ]
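For context on the r-th root problem, here is the classical randomised algorithm for the r = 2 case: Tonelli-Shanks square roots modulo a prime. Its only randomised step is drawing a quadratic nonresidue, which is exactly the kind of object whose deterministic construction the paper studies; this is standard background material, not the paper's own algorithm.

```python
import random

def tonelli_shanks(a, p):
    """Square root of a modulo an odd prime p (the r = 2 case)."""
    assert pow(a, (p - 1) // 2, p) == 1, "a must be a quadratic residue"
    q, s = p - 1, 0                             # write p - 1 = q * 2^s, q odd
    while q % 2 == 0:
        q //= 2
        s += 1
    if s == 1:                                  # p = 3 (mod 4): closed form
        return pow(a, (p + 1) // 4, p)
    z = 2                                       # randomised: find a nonresidue
    while pow(z, (p - 1) // 2, p) != p - 1:
        z = random.randrange(2, p)
    m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        i, t2i = 0, t                           # least i with t^(2^i) = 1
        while t2i != 1:
            t2i = t2i * t2i % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c = i, b * b % p
        t, r = t * c % p, r * b % p
    return r

p, x = 10009, 1234                              # 10009 is prime and 1 mod 4
print(tonelli_shanks(x * x % p, p) in (x, p - x))   # True
```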
Title: Competing magnetic interactions in spin-1/2 square lattice: hidden order in Sr$_2$VO$_4$, Abstract: With decreasing temperature Sr$_2$VO$_4$ undergoes two structural phase transitions, tetragonal-to-orthorhombic-to-tetragonal, without long-range magnetic order. Recent experiments suggest that only at very low temperature Sr$_{2}$VO$_{4}$ might enter some, yet unknown, phase with long-range magnetic order, but without orthorhombic distortion. By combining relativistic density functional theory with an extended spin-1/2 compass-Heisenberg model we find an antiferromagnetic single-stripe ground state with highly competing exchange interactions, involving a non-negligible inter-layer coupling, which places the system at the crossover between the XY and Heisenberg pictures. Most strikingly, we find a strong two-site "spin-compass" exchange anisotropy which is relieved by the orthorhombic distortion induced by the spin stripe order. Based on these results we discuss the origin of the hidden order phase and the possible formation of a spin-liquid at low temperatures.
[ 0, 1, 0, 0, 0, 0 ]
Title: Adaptive Matching for Expert Systems with Uncertain Task Types, Abstract: A matching in a two-sided market often incurs an externality: a matched resource may become unavailable to the other side of the market, at least for a while. This is especially an issue in online platforms involving human experts as the expert resources are often scarce. The efficient utilization of experts in these platforms is made challenging by the fact that the information available about the parties involved is usually limited. To address this challenge, we develop a model of a task-expert matching system where a task is matched to an expert using not only the prior information about the task but also the feedback obtained from the past matches. In our model the tasks arrive online while the experts are fixed and constrained by a finite service capacity. For this model, we characterize the maximum task resolution throughput a platform can achieve. We show that the natural greedy approaches where each expert is assigned a task most suitable to her skill is suboptimal, as it does not internalize the above externality. We develop a throughput optimal backpressure algorithm which does so by accounting for the `congestion' among different task types. Finally, we validate our model and confirm our theoretical findings with data-driven simulations via logs of Math.StackExchange, a StackOverflow forum dedicated to mathematics.
[ 1, 0, 0, 1, 0, 0 ]
Title: The t-t'-J model in one dimension using extremely correlated Fermi liquid theory and time dependent density matrix renormalization group, Abstract: We study the one dimensional t-t'-J model for generic couplings using two complementary theories, the extremely correlated Fermi liquid theory and time-dependent density matrix renormalization group over a broad energy scale. The two methods provide a unique insight into the strong momentum dependence of the self-energy of this prototypical non-Fermi liquid, described at low energies as a Tomonaga-Luttinger liquid. We also demonstrate its intimate relationship to spin-charge separation, i.e. the splitting of Landau quasiparticles of higher dimensions into two constituents, driven by strong quantum fluctuations inherent in one dimension. The momentum distribution function, the spectral function, and the excitation dispersion of these two methods also compare well.
[ 0, 1, 0, 0, 0, 0 ]
Title: CAOS: Concurrent-Access Obfuscated Store, Abstract: This paper proposes Concurrent-Access Obfuscated Store (CAOS), a construction for remote data storage that provides access-pattern obfuscation in a honest-but-curious adversarial model, while allowing for low bandwidth overhead and client storage. Compared to the state of the art, the main advantage of CAOS is that it supports concurrent access without a proxy, for multiple read-only clients and a single read-write client. Concurrent access is achieved by letting clients maintain independent maps that describe how the data is stored. These maps might diverge from client to client, but it is guaranteed that no client will ever lose track of current data. We achieve efficiency and concurrency at the expense of perfect obfuscation: in CAOS the extent to which access patterns are hidden is determined by the resources allocated to its built-in obfuscation mechanism. To assess this trade-off we provide both a security and a performance analysis of our protocol instance. We additionally provide a proof-of-concept implementation.
[ 1, 0, 0, 0, 0, 0 ]
Title: Safe Execution of Concurrent Programs by Enforcement of Scheduling Constraints, Abstract: Automated software verification of concurrent programs is challenging because of exponentially growing state spaces. Verification techniques such as model checking need to explore a large number of possible executions that are possible under a non-deterministic scheduler. State space reduction techniques such as partial order reduction simplify the verification problem, however, the reduced state space may still be exponentially large and intractable. This paper discusses Iteratively Relaxed Scheduling, a framework that uses scheduling constraints in order to simplify the verification problem and enable automated verification of programs which could not be handled with fully non-deterministic scheduling. Program executions are safe as long as the same scheduling constraints are enforced under which the program has been verified, e.g., by instrumenting a program with additional synchronization. As strict enforcement of scheduling constraints may induce a high execution time overhead, we present optimizations over a naive solution that reduce this overhead. Our evaluation of a prototype implementation on well-known benchmark programs shows the effect of scheduling constraints on the execution time overhead and how this overhead can be reduced by relaxing and choosing constraints.
[ 1, 0, 0, 0, 0, 0 ]
Title: A Short-Term Voltage Stability Index and case studies, Abstract: The short-term voltage stability (SVS) problem in large-scale receiving-end power systems is serious due to the increasing load demand, the increasing use of electronically controlled loads, and so on. Some serious blackouts are considered to be related to short-term voltage instability. In China, the East China Grid (ECG) is especially vulnerable to short-term voltage instability due to its increasing dependence on power injection from external grids through HVDC links. However, the SVS criteria used in practice are all qualitative, and the SVS indices proposed in previous studies are mostly based on these qualitative SVS criteria. So a Short-Term Voltage Stability Index (SVSI), which is continuous, quantitative and multi-dimensional, is proposed in this paper. The SVSI consists of three components, which reflect the transient voltage restoration, the transient voltage oscillation and the steady-state recovery ability of the voltage signal, respectively, after the contingency has been cleared. The theoretical background and affected factors of these three components of the SVSI are analyzed, together with some feasible applications. The validity of the SVSI is verified through more than 10,000 cases based on ECG. Additionally, a simple case of selecting candidate locations for installing dynamic var using the SVSI is presented to show its feasibility for solving the optimization problem of dynamic var allocation.
[ 1, 0, 1, 0, 0, 0 ]
Title: The Structure of the Inverse System of Level $K$-Algebras, Abstract: Macaulay's inverse system is an effective method to construct Artinian K-algebras with additional properties such as Gorenstein or level, and more generally with any socle type. Recently, Elias and Rossi gave the structure of the inverse system of $d$-dimensional Gorenstein K-algebras for any $d>0$. In this paper we extend their result by establishing a one-to-one correspondence between $d$-dimensional level K-algebras and certain submodules of the divided power ring. We give several examples to illustrate our result.
[ 0, 0, 1, 0, 0, 0 ]
Title: Exploring the Role of Intrinsic Nodal Activation on the Spread of Influence in Complex Networks, Abstract: In many complex networked systems, such as online social networks, activity originates at certain nodes and subsequently spreads on the network through influence. In this work, we consider the problem of modeling the spread of influence and the identification of influential entities in a complex network when nodal activation can happen via two different mechanisms. The first mechanism of activation stems from factors that are intrinsic to the node. The second mechanism comes from the influence of connected neighbors. After introducing the model, we provide an algorithm to mine for the influential nodes in such a scenario by modifying the well-known influence maximization algorithm to work with our model, which incorporates both forms of activation. Our model can be considered as a variation of the independent cascade diffusion model. We provide small motivating examples to facilitate an intuitive understanding of the effect of including the intrinsic activation mechanism. We sketch a proof of the submodularity of the influence function under the new formulation and demonstrate the same on larger graphs. Based on the model, we explain how influential content creators can drive engagement on social media platforms. Using additional experiments on a Twitter dataset, we then show how the formulation can be applied to real-world social media datasets. Finally, we derive a centrality metric that takes into account both mechanisms of activation and provides an accurate, computationally efficient alternative approach to the problem of identifying influencers under intrinsic activation.
[ 1, 0, 0, 0, 0, 0 ]
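A small Monte-Carlo sketch of an independent-cascade process augmented with intrinsic activation, in the spirit of the model described above. The two probabilities and the toy graph are illustrative assumptions, not the paper's calibrated formulation.

```python
import random

def spread_with_intrinsic(graph, seeds, p_edge=0.1, p_intrinsic=0.01, rng=random):
    """One run of an independent-cascade-style spread where nodes can also
    switch on by themselves; graph maps each node to its out-neighbours."""
    active = set(seeds)
    for v in graph:                              # intrinsic activations
        if v not in active and rng.random() < p_intrinsic:
            active.add(v)
    frontier = list(active)
    while frontier:                              # influence-driven activations
        new = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p_edge:
                    active.add(v)
                    new.append(v)
        frontier = new
    return active

g = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
sizes = [len(spread_with_intrinsic(g, seeds={0})) for _ in range(1000)]
print(sum(sizes) / len(sizes))                   # estimated expected spread from seed 0
```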
Title: The border support rank of two-by-two matrix multiplication is seven, Abstract: We show that the border support rank of the tensor corresponding to two-by-two matrix multiplication is seven over the complex numbers. We do this by constructing two polynomials that vanish on all complex tensors with format four-by-four-by-four and border rank at most six, but that do not vanish simultaneously on any tensor with the same support as the two-by-two matrix multiplication tensor. This extends the work of Hauenstein, Ikenmeyer, and Landsberg. We also give two proofs that the support rank of the two-by-two matrix multiplication tensor is seven over any field: one proof using a result of De Groote saying that the decomposition of this tensor is unique up to sandwiching, and another proof via the substitution method. These results answer a question asked by Cohn and Umans. Studying the border support rank of the matrix multiplication tensor is relevant for the design of matrix multiplication algorithms, because upper bounds on the border support rank of the matrix multiplication tensor lead to upper bounds on the computational complexity of matrix multiplication, via a construction of Cohn and Umans. Moreover, support rank has applications in quantum communication complexity.
[ 1, 0, 1, 0, 0, 0 ]
Title: TPA: Fast, Scalable, and Accurate Method for Approximate Random Walk with Restart on Billion Scale Graphs, Abstract: Given a large graph, how can we determine similarity between nodes in a fast and accurate way? Random walk with restart (RWR) is a popular measure for this purpose and has been exploited in numerous data mining applications including ranking, anomaly detection, link prediction, and community detection. However, previous methods for computing exact RWR require prohibitive storage sizes and computational costs, and alternative methods which avoid such costs by computing approximate RWR have limited accuracy. In this paper, we propose TPA, a fast, scalable, and highly accurate method for computing approximate RWR on large graphs. TPA exploits two important properties in RWR: 1) nodes close to a seed node are likely to be revisited in following steps due to block-wise structure of many real-world graphs, and 2) RWR scores of nodes which reside far from the seed node are proportional to their PageRank scores. Based on these two properties, TPA divides approximate RWR problem into two subproblems called neighbor approximation and stranger approximation. In the neighbor approximation, TPA estimates RWR scores of nodes close to the seed based on scores of few early steps from the seed. In the stranger approximation, TPA estimates RWR scores for nodes far from the seed using their PageRank. The stranger and neighbor approximations are conducted in the preprocessing phase and the online phase, respectively. Through extensive experiments, we show that TPA requires up to 3.5x less time with up to 40x less memory space than other state-of-the-art methods for the preprocessing phase. In the online phase, TPA computes approximate RWR up to 30x faster than existing methods while maintaining high accuracy.
[ 1, 0, 0, 0, 0, 0 ]
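For reference, the quantity that TPA approximates can be computed exactly on small graphs by power iteration. The sketch below does this; it is the baseline definition of random walk with restart, not the TPA algorithm itself.

```python
import numpy as np

def rwr_scores(A, seed, c=0.15, tol=1e-9, max_iter=1000):
    """Exact random walk with restart via power iteration; c is the restart
    probability and A a (possibly weighted) adjacency matrix."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1, keepdims=True)
    P = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)   # row-stochastic
    q = np.zeros(A.shape[0])
    q[seed] = 1.0
    r = q.copy()
    for _ in range(max_iter):
        r_next = (1 - c) * (P.T @ r) + c * q
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(rwr_scores(A, seed=0).round(3))   # proximity of nodes 0..3 to the seed
```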
Title: Novel polystyrene-based nanocomposites by phosphorene dispersion, Abstract: Polystyrene-based phosphorene nanocomposites were prepared by a solvent blending procedure allowing the embedding of black phosphorus (BP) nanoflakes in the polymer matrix. Raman spectroscopy, X-ray diffraction and TEM microscopy were employed to characterize the structural and morphological characteristics of the achieved hybrids, with the aim of evaluating the dispersion level of the black phosphorus layers. TGA and DSC analysis, as well as thermal oxidation and photo-degradation techniques, were employed to investigate the thermal and photo-stability of the samples. The collected results evidenced better thermal stability and photostability of both the polymer matrix and the dispersed layered phosphorus, suggesting interesting polymer-nanofiller synergistic effects ascribable to the presence and good dispersion of the 2D nanomaterial.
[ 0, 1, 0, 0, 0, 0 ]
Title: Analysis and Applications of Delay Differential Equations in Biology and Medicine, Abstract: The main purpose of this paper is to provide a summary of the fundamental methods for analyzing delay differential equations arising in biology and medicine. These methods are employed to illustrate the effects of time delay on the behavior of solutions, which include destabilization of steady states, periodic and oscillatory solutions, bifurcations, and stability switches. The biological interpretations of delay effects are briefly discussed.
[ 0, 0, 1, 0, 0, 0 ]
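As a concrete instance of the delay effects discussed above, the sketch below integrates Hutchinson's delayed logistic equation x'(t) = r x(t)(1 - x(t - tau)) with a fixed-step Euler scheme and a ring buffer for the delayed state; delay-induced oscillations appear once r*tau exceeds pi/2. The equation and parameter values are standard textbook choices, not taken from the paper.

```python
import numpy as np

def delayed_logistic(r=1.8, tau=1.0, history=0.5, t_end=40.0, dt=0.001):
    """Euler integration (method of steps) of x'(t) = r*x(t)*(1 - x(t - tau))
    with a constant history x(t) = history on [-tau, 0]."""
    n_delay = int(round(tau / dt))
    n_steps = int(round(t_end / dt))
    x = np.empty(n_steps + 1)
    x[0] = history
    buf = np.full(n_delay, history)          # ring buffer holding x(t - tau)
    for k in range(n_steps):
        x_lag = buf[k % n_delay]             # delayed value x(t_k - tau)
        x[k + 1] = x[k] + dt * r * x[k] * (1.0 - x_lag)
        buf[k % n_delay] = x[k]              # store x(t_k) for step k + n_delay
    return np.linspace(0.0, t_end, n_steps + 1), x

t, x = delayed_logistic()
print(x.min(), x.max())   # sustained oscillation around 1 since r*tau > pi/2
```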
Title: Planar Drawings of Fixed-Mobile Bigraphs, Abstract: A fixed-mobile bigraph G is a bipartite graph such that the vertices of one partition set are given with fixed positions in the plane and the mobile vertices of the other part, together with the edges, must be added to the drawing. We assume that G is planar and study the problem of finding, for a given k >= 0, a planar poly-line drawing of G with at most k bends per edge. In the most general case, we show NP-hardness. For k=0 and under additional constraints on the positions of the fixed or mobile vertices, we either prove that the problem is polynomial-time solvable or prove that it belongs to NP. Finally, we present a polynomial-time testing algorithm for a certain type of "layered" 1-bend drawings.
[ 1, 0, 0, 0, 0, 0 ]
Title: Recovery of Architecture Module Views using an Optimized Algorithm Based on Design Structure Matrices, Abstract: Design structure matrices (DSMs) are useful to represent high-level system structure, modeling interactions between design entities. DSMs are used for many visualization and abstraction activities. In this work, we propose the use of an existing DSM clustering algorithm to recover software architecture module views. To make it suitable to this domain, optimization has proved necessary. It was achieved through performance analysis and parameter tuning on the original algorithm. Results show that DSM clustering can be an alternative to other clustering algorithms.
[ 1, 0, 0, 0, 0, 0 ]
Title: Implementation and Analysis of QUIC for MQTT, Abstract: Transport and security protocols are essential to ensure reliable and secure communication between two parties. For IoT applications, these protocols must be lightweight, since IoT devices are usually resource constrained. Unfortunately, the existing transport and security protocols -- namely TCP/TLS and UDP/DTLS -- fall short in terms of connection overhead, latency, and connection migration when used in IoT applications. In this paper, after studying the root causes of these shortcomings, we show how utilizing QUIC in IoT scenarios results in a higher performance. Based on these observations, and given the popularity of MQTT as an IoT application layer protocol, we integrate MQTT with QUIC. By presenting the main APIs and functions developed, we explain how connection establishment and message exchange functionalities work. We evaluate the performance of MQTTw/QUIC versus MQTTw/TCP using wired, wireless, and long-distance testbeds. Our results show that MQTTw/QUIC reduces connection overhead in terms of the number of packets exchanged with the broker by up to 56%. In addition, by eliminating half-open connections, MQTTw/QUIC reduces processor and memory usage by up to 83% and 50%, respectively. Furthermore, by removing the head-of-line blocking problem, delivery latency is reduced by up to 55%. We also show that the throughput drops experienced by MQTTw/QUIC when a connection migration happens is considerably lower than that of MQTTw/TCP.
[ 1, 0, 0, 0, 0, 0 ]