Columns: text (string, 57 to 2.88k characters); labels (sequence of length 6)
Title: Impact of Continuous Integration on Code Reviews, Abstract: Peer code review and continuous integration often interleave with each other in modern software quality management. Although several studies investigate how non-technical factors (e.g., reviewer workload), developer participation and even patch size affect the code review process, the impact of continuous integration on code reviews is not yet properly understood. In this paper, we report an exploratory study using 578K automated build entries in which we investigate the impact of automated builds on code reviews. Our investigation suggests that successfully passed builds are more likely to encourage new code review participation in a pull request. Frequently built projects are found to maintain a steady level of reviewing activity over the years, which is largely missing from rarely built projects. Experiments with 26,516 automated build entries show that our proposed model can identify 64% of the builds that later triggered new code reviews.
[ 1, 0, 0, 0, 0, 0 ]
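The build-to-review prediction described in the abstract above lends itself to a simple supervised-learning sketch. The snippet below is illustrative only and is not the paper's pipeline: the feature set (build outcome, build duration, patch size, prior reviewer count), the synthetic data, and the plain logistic-regression classifier are all assumptions standing in for whatever model the authors actually used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 5000
# Hypothetical build-level features: build outcome (1 = passed), build
# duration in minutes, patch size in changed lines, prior reviewer count.
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.gamma(2.0, 5.0, n),
    rng.poisson(120, n),
    rng.poisson(2, n),
])
# Synthetic label: passed builds and smaller patches attract new reviewers
# more often (this mimics, not reproduces, the trend reported above).
logit = -1.0 + 1.2 * X[:, 0] - 0.002 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Fraction of review-triggering builds that the classifier identifies.
print("recall:", round(recall_score(y_te, clf.predict(X_te)), 3))
```

Recall on the positive class mirrors the abstract's notion of "identifying builds that later trigger new code reviews".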
Title: New Algorithms for Unordered Tree Inclusion, Abstract: The tree inclusion problem is, given two node-labeled trees $P$ and $T$ (the "pattern tree" and the "text tree"), to locate every minimal subtree in $T$ (if any) that can be obtained by applying a sequence of node insertion operations to $P$. The ordered tree inclusion problem is known to be solvable in polynomial time while the unordered tree inclusion problem is NP-hard. The currently fastest algorithm for the latter is from 1995 and runs in $O(poly(m,n) \cdot 2^{2d}) = O^{\ast}(4^{d})$ time, where $m$ and $n$ are the sizes of the pattern and text trees, respectively, and $d$ is the degree of the pattern tree. Here, we develop a new algorithm that improves the exponent $2d$ to $d$ by considering a particular type of ancestor-descendant relationships and applying dynamic programming, thus reducing the time complexity to $O^{\ast}(2^{d})$. We then study restricted variants of the unordered tree inclusion problem where the number of occurrences of different node labels and/or the input trees' heights are bounded and show that although the problem remains NP-hard in many such cases, if the leaves of $P$ are distinctly labeled and each label occurs at most $c$ times in $T$ then it can be solved in polynomial time for $c = 2$ and in $O^{\ast}(1.8^d)$ time for $c = 3$.
[ 1, 0, 0, 0, 0, 0 ]
Title: Learning to Price with Reference Effects, Abstract: As a firm varies the price of a product, consumers exhibit reference effects, making purchase decisions based not only on the prevailing price but also the product's price history. We consider the problem of learning such behavioral patterns as a monopolist releases, markets, and prices products. This context calls for pricing decisions that intelligently trade off between maximizing revenue generated by a current product and probing to gain information for future benefit. Due to dependence on price history, realized demand can reflect delayed consequences of earlier pricing decisions. As such, inference entails attribution of outcomes to prior decisions and effective exploration requires planning price sequences that yield informative future outcomes. Despite the considerable complexity of this problem, we offer a tractable systematic approach. In particular, we frame the problem as one of reinforcement learning and leverage Thompson sampling. We also establish a regret bound that provides graceful guarantees on how performance improves as data is gathered and how this depends on the complexity of the demand model. We illustrate merits of the approach through simulations.
[ 1, 0, 0, 0, 0, 0 ]
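The exploration-exploitation trade-off sketched in the abstract above can be illustrated with a toy Thompson-sampling loop. This is a hedged sketch under strong assumptions, not the paper's model: demand is taken to be linear in the price and in the gap between a smoothed reference price and the current price, the unknown coefficients get a conjugate Gaussian posterior, and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([10.0, 1.5, 0.8])   # true (a, b, c), unknown to the seller
sigma = 0.5                               # demand noise std (assumed known)
prices = np.linspace(0.5, 8.0, 40)        # candidate price grid
gamma = 0.7                               # memory of the reference price
r = 5.0                                   # initial reference price

A = np.eye(3) / 10.0                      # prior precision of theta
b = np.zeros(3)                           # precision-weighted prior mean

for t in range(200):
    Sigma = np.linalg.inv(A)
    Sigma = (Sigma + Sigma.T) / 2.0                        # keep it symmetric
    mu = Sigma @ b
    theta_s = rng.multivariate_normal(mu, Sigma)           # Thompson sample
    # Demand model: d = a - b*p + c*(r - p) + noise.
    feats = np.stack([np.ones_like(prices), -prices, r - prices], axis=1)
    p = prices[np.argmax(prices * (feats @ theta_s))]      # price maximizing sampled revenue
    x = np.array([1.0, -p, r - p])
    d = theta_true @ x + rng.normal(0.0, sigma)            # observed demand
    A += np.outer(x, x) / sigma**2                         # conjugate Gaussian update
    b += x * d / sigma**2
    r = gamma * r + (1 - gamma) * p                        # reference price evolves

print("posterior mean of (a, b, c):", np.round(np.linalg.inv(A) @ b, 2))
```

Sampling the coefficients, rather than using the posterior mean, is what makes the price sequence probe the reference-price dynamics instead of exploiting a possibly wrong point estimate.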
Title: Inverse Reinforce Learning with Nonparametric Behavior Clustering, Abstract: Inverse Reinforcement Learning (IRL) is the task of learning a single reward function, given a Markov Decision Process (MDP) whose reward function is not defined and a set of demonstrations generated by humans/experts. However, in practice, it may be unreasonable to assume that human behaviors can be explained by one reward function, since they may be inherently inconsistent. Also, demonstrations may be collected from various users and aggregated to infer and predict users' behaviors. In this paper, we introduce the Non-parametric Behavior Clustering IRL algorithm to simultaneously cluster demonstrations and learn multiple reward functions from demonstrations that may be generated from more than one behavior. Our method is iterative: it alternates between clustering demonstrations into different behavior clusters and inverse learning the reward functions until convergence. It is built upon the Expectation-Maximization formulation and non-parametric clustering in the IRL setting. Further, to improve computational efficiency, we remove the need to completely solve multiple IRL problems for multiple clusters during the iteration steps and introduce a resampling technique to avoid generating too many unlikely clusters. We demonstrate the convergence and efficiency of the proposed method by learning multiple driver behaviors from demonstrations generated in a grid-world environment and from continuous trajectories collected from autonomous robot cars using the Gazebo robot simulator.
[ 1, 0, 0, 0, 0, 0 ]
Title: $z^\circ$-ideals in intermediate rings of ordered field valued continuous functions, Abstract: A proper ideal $I$ in a commutative ring $R$ with unity is called a $z^\circ$-ideal if for each $a$ in $I$, the intersection of all minimal prime ideals in $R$ which contain $a$ is contained in $I$. For any totally ordered field $F$ and a completely $F$-regular topological space $X$, let $C(X,F)$ be the ring of all $F$-valued continuous functions on $X$ and $B(X,F)$ the aggregate of all those functions which are bounded over $X$. An explicit formula for all the $z^\circ$-ideals in an intermediate ring $A(X,F)$ lying between $B(X,F)$ and $C(X,F)$ is given in terms of ideals of closed sets in $X$. It turns out that an intermediate ring $A(X,F)\neq C(X,F)$ is never regular in the sense of von Neumann. This property further characterizes $C(X,F)$ amongst the intermediate rings within the class of $P_F$-spaces $X$. It is also realized that $X$ is an almost $P_F$-space if and only if each maximal ideal in $C(X,F)$ is a $z^\circ$-ideal. Incidentally, this property also characterizes $C(X,F)$ amongst the intermediate rings within the family of almost $P_F$-spaces.
[ 0, 0, 1, 0, 0, 0 ]
Title: Mosquito Detection with Neural Networks: The Buzz of Deep Learning, Abstract: Many real-world time-series analysis problems are characterised by scarce data. Solutions typically rely on hand-crafted features extracted from the time or frequency domain allied with classification or regression engines which condition on this (often low-dimensional) feature vector. The huge advances enjoyed by many application domains in recent years have been fuelled by the use of deep learning architectures trained on large data sets. This paper presents an application of deep learning for acoustic event detection in a challenging, data-scarce, real-world problem. Our candidate challenge is to accurately detect the presence of a mosquito from its acoustic signature. We develop convolutional neural networks (CNNs) operating on wavelet transformations of audio recordings. Furthermore, we interrogate the network's predictive power by visualising statistics of network-excitatory samples. These visualisations offer a deep insight into the relative informativeness of components in the detection problem. We include comparisons with conventional classifiers, conditioned on both hand-tuned and generic features, to stress the strength of automatic deep feature learning. Detection is achieved with performance metrics significantly surpassing those of existing algorithmic methods, as well as marginally exceeding those attained by individual human experts.
[ 1, 0, 0, 1, 0, 0 ]
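As a rough illustration of the pipeline the abstract above describes (a wavelet transform of the audio fed to a CNN), here is a self-contained toy in Python/PyTorch. It is not the authors' network or data: the Ricker-wavelet scalogram, the two synthetic "recordings", and the tiny two-layer CNN are all assumptions made purely for the sketch.

```python
import numpy as np
import torch
import torch.nn as nn

def ricker(points, a):
    # Ricker ("Mexican hat") wavelet sampled on `points` samples with width a.
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def scalogram(signal, widths, points=101):
    # Continuous wavelet transform by direct convolution, one row per width.
    return np.stack([np.convolve(signal, ricker(points, w), mode="same")
                     for w in widths])

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(0, 0.5, 1.0 / fs)
# Toy "recordings": a noisy 600 Hz tone (mosquito-like) versus pure noise.
pos = np.sin(2 * np.pi * 600 * t) + 0.5 * rng.standard_normal(t.size)
neg = rng.standard_normal(t.size)
widths = np.arange(1, 31)

x = torch.tensor(np.stack([scalogram(pos, widths), scalogram(neg, widths)]),
                 dtype=torch.float32).unsqueeze(1)   # (batch, 1, scales, time)
y = torch.tensor([1, 0])

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):                                   # tiny training loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
print("train loss:", float(loss))
```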
Title: The inseparability of sampling and time and its influence on attempts to unify the molecular and fossil records, Abstract: The two major approaches to studying macroevolution in deep time are the fossil record and reconstructed relationships among extant taxa from molecular data. Results based on one approach sometimes conflict with those based on the other, with inconsistencies often attributed to inherent flaws of one (or the other) data source. What is unquestionable is that both the molecular and fossil records are limited reflections of the same evolutionary history, and any contradiction between them represents a failure of our existing models to explain the patterns we observe. Fortunately, the different limitations of each record provide an opportunity to test or calibrate the other, and new methodological developments leverage both records simultaneously. However, we must reckon with the distinct relationships between sampling and time in the fossil record and molecular phylogenies. These differences impact our recognition of baselines, and the analytical incorporation of age estimate uncertainty. These differences in perspective also influence how different practitioners view the past and evolutionary time itself, bearing important implications for the generality of methodological advancements, and differences in the philosophical approach to macroevolutionary theory across fields.
[ 0, 0, 0, 0, 1, 0 ]
Title: The imprints of bars on the vertical stellar population gradients of galactic bulges, Abstract: This is the second paper of a series aimed at studying the stellar kinematics and population properties of bulges in highly-inclined barred galaxies. In this work, we carry out a detailed analysis of the stellar age, metallicity and [Mg/Fe] of 28 highly-inclined ($i > 65^{\circ}$) disc galaxies, from S0 to S(B)c, observed with the SAURON integral-field spectrograph. The sample is divided into two clean samples of barred and unbarred galaxies, on the basis of the correlation between the stellar velocity and h$_3$ profiles, as well as the level of cylindrical rotation within the bulge region. We find that while the mean stellar age, metallicity and [Mg/Fe] in the bulges of barred and unbarred galaxies are not statistically distinct, the [Mg/Fe] gradients along the minor axis (away from the disc) of barred galaxies are significantly different from those of galaxies without bars. For barred galaxies, stars that are vertically further away from the midplane are in general more [Mg/Fe]-enhanced, and thus the vertical gradients in [Mg/Fe] for barred galaxies are mostly positive, while for unbarred bulges the [Mg/Fe] profiles are typically negative or flat. This result, together with the old populations observed in the barred sample, indicates that bars are long-lasting structures and therefore are not easily destroyed. The marked [Mg/Fe] differences with respect to the bulges of unbarred galaxies indicate that different formation/evolution scenarios are required to explain their build-up, and emphasize the role of bars in redistributing stellar material in the bulge-dominated regions.
[ 0, 1, 0, 0, 0, 0 ]
Title: A structure-preserving split finite element discretization of the split 1D wave equations, Abstract: We introduce a new finite element (FE) discretization framework applicable for covariant split equations. The introduction of additional differential forms (DF) that form pairs with the original ones permits the splitting of the equations into topological momentum and continuity equations and metric-dependent closure equations that apply the Hodge-star operator. Our discretization framework conserves this geometrical structure and provides for all DFs proper FE spaces such that the differential operators hold in strong form. We introduce lowest possible order discretizations of the split 1D wave equations, in which the discrete momentum and continuity equations follow by trivial projections onto piecewise constant FE spaces, omitting partial integrations. Approximating the Hodge-star by nontrivial Galerkin projections (GP), the two discrete metric equations follow by projections onto either the piecewise constant (GP0) or piecewise linear (GP1) space. Our framework gives us three schemes with significantly different behavior. The split scheme using twice GP1 is unstable and shares the dispersion relation with the P1-P1 FE scheme that approximates both variables by piecewise linear spaces (P1). The split schemes that apply a mixture of GP1 and GP0 share the dispersion relation with the stable P1-P0 FE scheme that applies piecewise linear and piecewise constant (P0) spaces. However, the split schemes exhibit second order convergence for both quantities of interest. For the split scheme applying twice GP0, we are not aware of a corresponding standard formulation to compare with. Though it does not provide a satisfactory approximation of the dispersion relation as short waves are propagated much too fast, the discovery of the new scheme illustrates the potential of our discretization framework as a toolbox to study and find FE schemes by new combinations of FE spaces.
[ 0, 0, 1, 0, 0, 0 ]
Title: Quantum quench dynamics, Abstract: Quench dynamics is an active area of study encompassing condensed matter physics and quantum information, with applications to cold-atomic gases and pump-probe spectroscopy of materials. Recent theoretical progress in studying quantum quenches is reviewed. Quenches in interacting one dimensional systems as well as systems in higher spatial dimensions are covered. The appearance of non-trivial steady states following a quench in exactly solvable models is discussed, and the stability of these states to perturbations is described. Proper conserving approximations needed to capture the onset of thermalization at long times are outlined. The appearance of universal scaling for quenches near critical points, and the role of the renormalization group in capturing the transient regime, are reviewed. Finally the effect of quenches near critical points on the dynamics of entanglement entropy and entanglement statistics is discussed. The extraction of critical exponents from the entanglement statistics is outlined.
[ 0, 1, 0, 0, 0, 0 ]
Title: R&D On Beam Injection and Bunching Schemes In The Fermilab Booster, Abstract: Fermilab is committed to upgrading its accelerator complex to support HEP experiments at the intensity frontier. The ongoing Proton Improvement Plan (PIP) enables us to reach 700 kW beam power on the NuMI neutrino targets. By the end of the next decade, the current 400 MeV normal conducting LINAC will be replaced by an 800 MeV superconducting LINAC (PIP-II), increasing the beam power by >50% over the PIP design goal. In both the PIP and PIP-II eras, the existing Booster is going to play a very significant role, at least for the next two decades. In the meanwhile, we have recently developed an innovative beam injection and bunching scheme for the Booster, called the "early injection scheme", that continues to use the existing 400 MeV LINAC and has been implemented in operation. This scheme has the potential to increase the Booster beam intensity by >40% over the PIP design goal. Some benefits from the scheme have already been seen. In this paper, I will describe the basic principle of the scheme, results from recent beam experiments, our experience with the new scheme in operation, current status, issues and future plans. This scheme fits well with the current and future intensity upgrade programs at Fermilab.
[ 0, 1, 0, 0, 0, 0 ]
Title: Deep Structured Generative Models, Abstract: Deep generative models have shown promising results in generating realistic images, but it is still non-trivial to generate images with complicated structures. The main reason is that most of the current generative models fail to explore the structures in the images, including spatial layout and semantic relations between objects. To address this issue, we propose a novel deep structured generative model which boosts generative adversarial networks (GANs) with the aid of structure information. In particular, the layout or structure of the scene is encoded by a stochastic and-or graph (sAOG), in which the terminal nodes represent single objects and edges represent relations between objects. With the sAOG appropriately harnessed, our model can successfully capture the intrinsic structure in the scenes and generate images of complicated scenes accordingly. Furthermore, a detection network is introduced to infer scene structures from an image. Experimental results demonstrate the effectiveness of our proposed method on both modeling the intrinsic structures and generating realistic images.
[ 0, 0, 0, 1, 0, 0 ]
Title: Machine Learning Techniques for Stellar Light Curve Classification, Abstract: We apply machine learning techniques in an attempt to predict and classify stellar properties from noisy and sparse time-series data. We preprocessed over 94 GB of Kepler light curves from MAST to classify according to ten distinct physical properties using both representation learning and feature engineering approaches. Studies using machine learning in the field have primarily been done on simulated data, making our study one of the first to use real light curve data for machine learning approaches. We tuned our data using previous work with simulated data as a template and achieved mixed results between the two approaches. Representation learning using a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) produced no successful predictions, but our work with feature engineering was successful for both classification and regression. In particular, we were able to estimate stellar density, stellar radius, and effective temperature with low error (~2-4%) and to classify the number of transits for a given star with good accuracy (~75%). The results show promise for improving both approaches by using larger datasets with a larger minority class. This work has the potential to provide a foundation for future tools and techniques to aid in the analysis of astrophysical data.
[ 0, 1, 0, 0, 0, 0 ]
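The feature-engineering branch mentioned in the abstract above can be sketched with scikit-learn. The snippet is illustrative only: the synthetic box-shaped transits, the hand-picked summary statistics, and the random forest are assumptions, not the paper's actual features or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def light_curve_features(flux):
    # Hand-crafted summary statistics of a (detrended) flux time series.
    return np.array([
        np.mean(flux), np.std(flux),
        np.ptp(flux),                        # peak-to-peak amplitude
        np.percentile(flux, 5), np.percentile(flux, 95),
        np.mean(np.abs(np.diff(flux))),      # mean absolute first difference
    ])

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(300):
    n_transits = int(rng.integers(0, 4))     # class label: number of transits
    flux = 1.0 + 0.001 * rng.standard_normal(2000)
    for c in rng.choice(2000 - 50, size=n_transits, replace=False):
        flux[c:c + 20] -= 0.01               # inject box-shaped transit dips
    X.append(light_curve_features(flux))
    y.append(n_transits)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```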
Title: Frequency measurement of the clock transition of an indium ion sympathetically-cooled in a linear trap, Abstract: We report frequency measurement of the clock transition in an 115In+ ion sympathetically-cooled with Ca+ ions in a linear rf trap. The Ca+ ions are used as a probe of the external electromagnetic field and as the coolant for preparing the cold In+. The frequency is determined to be 1 267 402 452 901 049.9 (6.9) Hz by averaging 36 measurements using an optical frequency comb referenced to the frequency standards located in the same site.
[ 0, 1, 0, 0, 0, 0 ]
Title: Sentence-level dialects identification in the greater China region, Abstract: Identifying different varieties of the same language is more challenging than identifying unrelated languages. In this paper, we propose an approach to discriminate between language varieties or dialects of Mandarin Chinese for Mainland China, Hong Kong, Taiwan, Macao, Malaysia and Singapore, a.k.a. the Greater China Region (GCR). When applied to dialect identification in the GCR, we find that the commonly used character-level or word-level uni-gram features are not very effective, since there exist several specific problems such as the ambiguity and the context-dependent characteristics of words in the dialects of the GCR. To overcome these challenges, we use not only general features like character-level n-grams, but also many new word-level features, including PMI-based and word alignment-based features. A series of evaluation results on both news and open-domain datasets from Wikipedia show the effectiveness of the proposed approach.
[ 1, 0, 0, 0, 0, 0 ]
Title: Measuring abstract reasoning in neural networks, Abstract: Whether neural networks can learn abstract reasoning or whether they merely rely on superficial statistics is a topic of recent debate. Here, we propose a dataset and challenge designed to probe abstract reasoning, inspired by a well-known human IQ test. To succeed at this challenge, models must cope with various generalisation `regimes' in which the training and test data differ in clearly-defined ways. We show that popular models such as ResNets perform poorly, even when the training and test sets differ only minimally, and we present a novel architecture, with a structure designed to encourage reasoning, that does significantly better. When we vary the way in which the test questions and training data differ, we find that our model is notably proficient at certain forms of generalisation, but notably weak at others. We further show that the model's ability to generalise improves markedly if it is trained to predict symbolic explanations for its answers. Altogether, we introduce and explore ways to both measure and induce stronger abstract reasoning in neural networks. Our freely-available dataset should motivate further progress in this direction.
[ 0, 0, 0, 1, 0, 0 ]
Title: On fractional powers of Bessel operators, Abstract: This paper was published in the special issue of the Journal of Inequalities and Special Functions dedicated to Professor Ivan Dimovski's contributions to different fields of mathematics: transmutation theory, special functions, integral transforms, function theory etc. In this paper we study fractional powers of the Bessel differential operator. The fractional powers are defined explicitly in integral form, without the use of integral transforms in their definition. Some general properties of the fractional powers of the Bessel differential operator are proved and some are listed. Among them are different variations of definitions, relations with the Mellin and Hankel transforms, the group property, a generalized Taylor formula with Bessel operators, and the evaluation of the resolvent integral operator in terms of the Wright or generalized Mittag-Leffler functions. At the end, some topics are indicated for further study and possible generalizations. A further aim of the paper is to attract attention to, and give references for, not widely known results on fractional powers of the Bessel differential operator.
[ 0, 0, 1, 0, 0, 0 ]
Title: Small Telescope Exoplanet Transit Surveys: XO, Abstract: The XO project aims at detecting transiting exoplanets around bright stars from the ground using small telescopes. The original configuration of XO (McCullough et al. 2005) has been changed and extended as described here. The instrumental setup consists of three identical units located at different sites, each composed of two lenses equipped with CCD cameras mounted on the same mount. We observed two strips of the sky covering an area of 520 deg$^2$ for twice nine months. We build lightcurves for ~20,000 stars up to magnitude R~12.5 using a custom-made photometric data reduction pipeline. The photometric precision is around 1-2% for most stars, and the large quantity of data allows us to reach millimagnitude precision when folding the lightcurves on timescales that are relevant to exoplanetary transits. We search for periodic signals and identify several hundred variable stars and a few tens of transiting planet candidates. Follow-up observations are underway to confirm or reject these candidates. We have found two close-in gas giant planets so far, in line with the expected yield.
[ 0, 1, 0, 0, 0, 0 ]
Title: Divide-and-Conquer Reinforcement Learning, Abstract: Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into "slices", and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at this http URL
[ 1, 0, 0, 0, 0, 0 ]
Title: Whipping of electrified visco-capillary jets in airflows, Abstract: An electrified visco-capillary jet shows different dynamic behavior, such as cone forming, breakage into droplets, whipping and coiling, depending on the considered parameter regime. The whipping instability, which is of fundamental importance for electrospinning, has been approached by means of stability analysis in previous papers. In this work we alternatively propose a model framework in which the instability can be computed straightforwardly as the stable stationary solution of an asymptotic Cosserat rod description. For this purpose, we adopt a procedure by Ribe (Proc. Roy. Soc. Lond. A, 2004) describing the jet dynamics with respect to a frame rotating with the a priori unknown whipping frequency that itself becomes part of the solution. The rod model allows for stretching, bending and torsion, taking into account inertia, viscosity, surface tension, electric field and air drag. For the resulting parametric boundary value problem of ordinary differential equations we present a continuation-collocation method. On top of an implicit Runge-Kutta scheme of fifth order, our continuation procedure makes efficient and robust simulation and navigation through a high-dimensional parameter space possible. Despite the simplicity of the employed electric force model, the numerical results are convincing, and the whipping effect is qualitatively well characterized.
[ 0, 1, 1, 0, 0, 0 ]
Title: On Optimizing Feedback Interval for Temporally Correlated MIMO Channels With Transmit Beamforming And Finite-Rate Feedback, Abstract: A receiver with perfect channel state information (CSI) in a point-to-point multiple-input multiple-output (MIMO) channel can compute the transmit beamforming vector that maximizes the transmission rate. For frequency-division duplex, a transmitter is not able to estimate CSI directly and has to obtain a quantized transmit beamforming vector from the receiver via a rate-limited feedback channel. We assume that the time evolution of MIMO channels is modeled as a Gauss-Markov process parameterized by a temporal-correlation coefficient. Since the feedback rate is usually low, we assume rank-one transmit beamforming or transmission with a single data stream. For a given feedback rate, we analyze the optimal feedback interval that maximizes the average received power of systems with two transmit or two receive antennas. For other system sizes, the optimal feedback interval is approximated by maximizing the rate difference in a large system limit. Numerical results show that the large system approximation can predict the optimal interval for a finite-size system quite accurately. Numerical results also show that, when the number of feedback bits is small, quantizing transmit beamforming with the optimal feedback interval gives a rate up to 10% larger than that of the existing Kalman-filter scheme and up to 44% larger than feeding back in every block.
[ 1, 0, 0, 0, 0, 0 ]
Title: RADNET: Radiologist Level Accuracy using Deep Learning for HEMORRHAGE detection in CT Scans, Abstract: We describe a deep learning approach for automated brain hemorrhage detection from computed tomography (CT) scans. Our model emulates the procedure followed by radiologists to analyse a 3D CT scan in the real world. Similar to radiologists, the model sifts through 2D cross-sectional slices while paying close attention to potential hemorrhagic regions. Further, the model utilizes 3D context from neighboring slices to improve predictions at each slice and subsequently aggregates the slice-level predictions to provide a diagnosis at the CT level. We refer to our proposed approach as Recurrent Attention DenseNet (RADnet), as it employs the original DenseNet architecture augmented with attention components for slice-level predictions and a recurrent neural network layer for incorporating 3D context. The real-world performance of RADnet has been benchmarked against independent analysis performed by three senior radiologists for 77 brain CTs. RADnet demonstrates 81.82% hemorrhage prediction accuracy at the CT level, which is comparable to radiologists. Further, RADnet achieves higher recall than two of the three radiologists, which is remarkable.
[ 0, 0, 0, 1, 0, 0 ]
Title: Median statistics estimates of Hubble and Newton's Constant, Abstract: The robustness of any statistic depends upon the number of assumptions it makes about the measured data. We point out the advantages of median statistics using toy numerical experiments and demonstrate its robustness when the number of assumptions we can make about the data is limited. We then apply the median statistics technique to obtain estimates of two constants of nature, the Hubble Constant ($H_0$) and Newton's Gravitational Constant ($G$), both of which show significant differences between different measurements. For $H_0$, we update the analysis done by Chen and Ratra (2011) and Gott et al. (2001) using $576$ measurements. After grouping the different results according to their primary type of measurement, we find that the median estimates are $H_0=72.5^{+2.5}_{-8}$ km/sec/Mpc, with errors corresponding to 95% c.l. (2$\sigma$), and $G=6.674702^{+0.0014}_{-0.0009} \times 10^{-11} \mathrm{N m^{2}kg^{-2}}$, corresponding to 68% c.l. (1$\sigma$).
[ 0, 1, 0, 0, 0, 0 ]
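The appeal of median statistics in the abstract above is that only independence and the absence of a global systematic bias are assumed, so confidence ranges on the median follow from binomial counting alone. Below is a minimal sketch of that computation with made-up numbers; it does not reproduce the paper's 576-measurement compilation or its exact interval conventions.

```python
import numpy as np
from scipy.stats import binom

def median_statistics(values, cl=0.95):
    """Median and an order-statistic confidence range, in the Gott et al. (2001) spirit."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    # For n independent values, P(x[a] < true median < x[b]) equals
    # sum_{k=a+1..b} C(n, k) / 2^n; widen a symmetric bracket of order
    # statistics until it reaches the requested confidence level.
    a = b = n // 2
    while binom.cdf(b, n, 0.5) - binom.cdf(a, n, 0.5) < cl:
        if a == 0 and b == n - 1:
            break
        a, b = max(a - 1, 0), min(b + 1, n - 1)
    return np.median(x), x[a], x[b]

# Hypothetical H0 values (km/s/Mpc), only to exercise the function.
rng = np.random.default_rng(0)
h0 = rng.normal(70.0, 4.0, size=50)
med, lo, hi = median_statistics(h0, cl=0.95)
print(f"H0 = {med:.1f} (+{hi - med:.1f} / -{med - lo:.1f}) km/s/Mpc at ~95% c.l.")
```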
Title: Re-parameterizing and reducing families of normal operators, Abstract: We present a new proof of results of Kurdyka & Paunescu, and of Rainer, about real-analytic multi-parameter generalizations of classical results by Rellich and Kato on the reduction of families of univariate deformations of normal operators over real or complex vector spaces of finite dimension. Given a real analytic family of normal operators over a finite dimensional real or complex vector space, there exists a locally finite composition of blowings-up with smooth centers re-parameterizing the given family such that each point of the source space of the re-parameterizing mapping has a neighbourhood over which there exists a real analytic orthonormal frame in which the pull-back of the operator is in reduced form at every point of the neighbourhood. A free by-product of our proof is the local real analyticity of the eigenvalues, which in all prior works was a prerequisite step to obtain local regular reducing bases.
[ 0, 0, 1, 0, 0, 0 ]
Title: Urban Scene Segmentation with Laser-Constrained CRFs, Abstract: Robots typically possess sensors of different modalities, such as colour cameras, inertial measurement units, and 3D laser scanners. Often, solving a particular problem becomes easier when more than one modality is used. However, while there are undeniable benefits to combining sensors of different modalities, the process tends to be complicated. Segmenting the scenes observed by the robot into a discrete set of classes is a central requirement for autonomy, as understanding the scene is the first step to reasoning about future situations. Scene segmentation is commonly performed using either image data or 3D point cloud data. In computer vision, many successful methods for scene segmentation are based on conditional random fields (CRFs), where the maximum a posteriori (MAP) solution to the segmentation can be obtained by inference. In this paper we devise a new CRF inference method for scene segmentation that incorporates global constraints, enforcing that sets of nodes are assigned the same class label. To do this efficiently, the CRF is formulated as a relaxed quadratic program whose MAP solution is found using a gradient-based optimisation approach. The proposed method is evaluated on images and 3D point cloud data gathered in urban environments, where the image data provides the appearance features needed by the CRF, while the 3D point cloud data provides global spatial constraints over sets of nodes. Comparisons with belief propagation, conventional quadratic programming relaxation, and higher-order potential CRFs show the benefits of the proposed method.
[ 1, 0, 0, 0, 0, 0 ]
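The central computational idea in the abstract above, relaxing MAP inference in a pairwise CRF to a quadratic program solved by gradient-based optimisation, can be sketched generically. The chain graph, the Potts-style pairwise term, and plain projected gradient ascent below are assumptions for illustration, not the paper's laser-constrained formulation.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex (Duchi et al., 2008).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

rng = np.random.default_rng(0)
n_nodes, n_labels = 6, 3
unary = rng.normal(size=(n_nodes, n_labels))          # unary potentials u_i
edges = [(i, i + 1) for i in range(n_nodes - 1)]      # chain graph
P = 0.5 * np.eye(n_labels)                            # Potts-like pairwise reward

# Relaxed assignment q_i on the simplex; maximize
#   sum_i q_i . u_i + sum_(i,j) q_i^T P q_j  by projected gradient ascent.
q = np.full((n_nodes, n_labels), 1.0 / n_labels)
step = 0.1
for _ in range(200):
    grad = unary.copy()
    for i, j in edges:
        grad[i] += P @ q[j]
        grad[j] += P.T @ q[i]
    q = np.array([project_simplex(qi) for qi in (q + step * grad)])

print("MAP labels (rounded relaxation):", q.argmax(axis=1))
```

Global constraints of the kind the paper describes would add further terms tying groups of nodes to a common label; only the plain relaxation is shown here.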
Title: The GAPS Programme with HARPS-N at TNG. XIII. The orbital obliquity of three close-in massive planets hosted by dwarf K-type stars: WASP-43, HAT-P-20 and Qatar-2, Abstract: In the framework of the GAPS project, we are conducting an observational programme aimed at the determination of the orbital obliquity of known transiting exoplanets. The targets are selected to probe the obliquity against a wide range of stellar and planetary physical parameters. We exploit high-precision radial velocity (RV) measurements, delivered by the HARPS-N spectrograph at the 3.6m Telescopio Nazionale Galileo, to measure the Rossiter-McLaughlin (RM) effect in RV time-series bracketing planet transits, and to refine the orbital parameters determinations with out-of-transit RV data. We also analyse new transit light curves obtained with several 1-2m class telescopes to better constrain the physical fundamental parameters of the planets and parent stars. We report here on new transit spectroscopic observations for three very massive close-in giant planets: WASP43b, HATP20b and Qatar2b orbiting dwarf K-type stars with effective temperature well below 5000 K. We find $\lambda = 3.5\pm6.8$ deg for WASP43b and $\lambda = -8.0\pm6.9$ deg for HATP20b, while for Qatar2, our faintest target, the RM effect is only marginally detected, though our best-fit value $\lambda = 15\pm20$ deg is in agreement with a previous determination. In combination with stellar rotational periods derived photometrically, we estimate the true spin-orbit angle, finding that WASP43b is aligned while the orbit of HATP20b presents a small but significant obliquity ($\Psi = 36_{-12}^{+10}$ deg). By analyzing the CaII H&K chromospheric emission lines for HATP20 and WASP43, we find evidence for an enhanced level of stellar activity which is possibly induced by star-planet interactions.
[ 0, 1, 0, 0, 0, 0 ]
Title: Probabilistic Program Equivalence for NetKAT, Abstract: We tackle the problem of deciding whether two probabilistic programs are equivalent in Probabilistic NetKAT, a formal language for specifying and reasoning about the behavior of packet-switched networks. We show that the problem is decidable for the history-free fragment of the language by developing an effective decision procedure based on stochastic matrices. The main challenge lies in reasoning about iteration, which we address by designing an encoding of the program semantics as a finite-state absorbing Markov chain, whose limiting distribution can be computed exactly. In an extended case study on a real-world data center network, we automatically verify various quantitative properties of interest, including resilience in the presence of failures, by analyzing the Markov chain semantics.
[ 1, 0, 0, 0, 0, 0 ]
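The "finite-state absorbing Markov chain whose limiting distribution can be computed exactly" mentioned above is standard linear algebra, sketched below on a made-up 5-state chain; this is the generic computation, not the NetKAT encoding itself.

```python
import numpy as np

# Transition matrix in canonical block form [[Q, R], [0, I]]:
# Q is transient-to-transient, R is transient-to-absorbing.
# Toy chain with 3 transient and 2 absorbing states.
Q = np.array([[0.2, 0.3, 0.1],
              [0.0, 0.4, 0.3],
              [0.1, 0.1, 0.2]])
R = np.array([[0.3, 0.1],
              [0.2, 0.1],
              [0.3, 0.3]])
assert np.allclose(np.hstack([Q, R]).sum(axis=1), 1.0)   # rows are distributions

N = np.linalg.inv(np.eye(Q.shape[0]) - Q)   # fundamental matrix N = (I - Q)^{-1}
B = N @ R                                   # absorption probabilities B = N R
print("limiting distribution over absorbing states, per start state:\n", B)
print("expected steps to absorption:", N.sum(axis=1))
```

Each row of B is exact (no sampling or truncation), which is what makes program equivalence in this setting decidable by comparing such matrices.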
Title: Spatially Adaptive Colocalization Analysis in Dual-Color Fluorescence Microscopy, Abstract: Colocalization analysis aims to study complex spatial associations between bio-molecules via optical imaging techniques. However, existing colocalization analysis workflows only assess an average degree of colocalization within a certain region of interest and ignore the unique and valuable spatial information offered by microscopy. In the current work, we introduce a new framework for colocalization analysis that allows us to quantify colocalization levels at each individual location and automatically identify pixels or regions where colocalization occurs. The framework, referred to as spatially adaptive colocalization analysis (SACA), integrates a pixel-wise local kernel model for colocalization quantification and a multi-scale adaptive propagation-separation strategy for utilizing spatial information to detect colocalization in a spatially adaptive fashion. Applications to simulated and real biological datasets demonstrate the practical merits of SACA in what we hope to be an easily applicable and robust colocalization analysis method. In addition, theoretical properties of SACA are investigated to provide rigorous statistical justification.
[ 0, 0, 0, 1, 0, 0 ]
Title: On consistent vertex nomination schemes, Abstract: Given a vertex of interest in a network $G_1$, the vertex nomination problem seeks to find the corresponding vertex of interest (if it exists) in a second network $G_2$. A vertex nomination scheme produces a list of the vertices in $G_2$, ranked according to how likely they are judged to be the corresponding vertex of interest in $G_2$. The vertex nomination problem and related information retrieval tasks have attracted much attention in the machine learning literature, with numerous applications to social and biological networks. However, the current framework has often been confined to a comparatively small class of network models, and the concept of statistically consistent vertex nomination schemes has been only shallowly explored. In this paper, we extend the vertex nomination problem to a very general statistical model of graphs. Further, drawing inspiration from the long-established classification framework in the pattern recognition literature, we provide definitions for the key notions of Bayes optimality and consistency in our extended vertex nomination framework, including a derivation of the Bayes optimal vertex nomination scheme. In addition, we prove that no universally consistent vertex nomination schemes exist. Illustrative examples are provided throughout.
[ 0, 0, 0, 1, 0, 0 ]
Title: Converse passivity theorems, Abstract: Passivity is an imperative concept and a widely utilized tool in the analysis and control of interconnected systems. It naturally arises in the modelling of physical systems involving passive elements and dynamics. While many theorems on passivity are known in the theory of robust control, very few converse passivity results exist. This paper establishes various versions of converse passivity theorems for nonlinear feedback systems. In particular, open-loop passivity is shown to be necessary to ensure closed-loop passivity from an input-output perspective. Moreover, the stability of the feedback interconnection of a specific system with an arbitrary passive system is shown to imply passivity of the system itself.
[ 0, 0, 1, 0, 0, 0 ]
Title: Neural Networks retrieving Boolean patterns in a sea of Gaussian ones, Abstract: Restricted Boltzmann Machines are key tools in Machine Learning and are described by the energy function of bipartite spin-glasses. From a statistical mechanical perspective, they share the same Gibbs measure of Hopfield networks for associative memory. In this equivalence, weights in the former play as patterns in the latter. As Boltzmann machines usually require real weights to be trained with gradient-descent-like methods, while Hopfield networks typically store binary patterns in order to retrieve them, the investigation of a mixed Hebbian network, equipped with both real (e.g., Gaussian) and discrete (e.g., Boolean) patterns, naturally arises. We prove that, in the challenging regime of a high storage of real patterns, where retrieval is forbidden, an extra load of Boolean patterns can still be retrieved, as long as the ratio between the overall load and the network size does not exceed a critical threshold, which turns out to be the same as in the standard Amit-Gutfreund-Sompolinsky theory. Assuming replica symmetry, we study the case of a low load of Boolean patterns by combining the stochastic stability and Hamilton-Jacobi interpolating techniques. The result can be extended to the high-load case by a non-rigorous but standard replica computation argument.
[ 0, 1, 0, 0, 0, 0 ]
Title: Link colorings and the Goeritz matrix, Abstract: We discuss the connection between colorings of a link diagram and the Goeritz matrix.
[ 0, 0, 1, 0, 0, 0 ]
Title: Left-invariant Grauert tubes on SU(2), Abstract: Let M be a real analytic Riemannian manifold. An adapted complex structure on TM is a complex structure on a neighborhood of the zero section such that the leaves of the Riemann foliation are complex submanifolds. This structure is called entire if it may be extended to the whole of TM. We call such manifolds Grauert tubes, or simply tubes. We consider here the case of M = G a compact connected Lie group with a left-invariant metric, and try to determine for which such metrics the associated tube is entire. It is well-known that the Grauert tube of a bi-invariant metric on a Lie group is entire. The case of the smallest group SU(2) is treated completely, thanks to the complete integrability of the geodesic flow for such a metric, a standard result in classical mechanics. Along the way we find a new obstruction to tubes being entire which is made visible by the complete integrability. (New reference and exposition shortened, 11/17/2017.)
[ 0, 0, 1, 0, 0, 0 ]
Title: Ricci flow on cone surfaces and a three-dimensional expanding soliton, Abstract: The main objective of this thesis is the study of the evolution under the Ricci flow of surfaces with singularities of cone type. A second objective, which emerged from the techniques we use, is the study of families of Ricci flow solitons in dimensions 2 and 3. The Ricci flow is an evolution equation for Riemannian manifolds, introduced by R. Hamilton in 1982. It was the achievements obtained by G. Perelman with this technique in 2002 that established the Ricci flow as a discipline in itself, generating great interest in the community. This thesis contains four original results. The first result is a complete classification of solitons on smooth and cone surfaces. This classification completes the preceding results found by Hamilton, Chow and Wu and others, and we obtain explicit descriptions of all solitons in dimension 2. The second result is a geometrization of cone surfaces by Ricci flow. This result, which uses the aforementioned first result, extends the theory of Hamilton to the singular case. This is the most comprehensive result in the thesis, for which we use and develop analysis and PDE techniques, as well as comparison geometry techniques. The third result is the existence of a Ricci flow that removes cone singularities. This clearly exposes the non-uniqueness of solutions to the flow, in analogy with the Ricci flow with cusps of P. Topping. The fourth result is the construction of a new expanding gradient Ricci soliton in dimension 3. Just as we do with solitons on cone surfaces, we give an explicit construction using techniques of phase portraits. We also prove that this is the only soliton with its topology and its lower bound on the curvature, and moreover that it is a critical case amongst all expanding solitons in dimension 3 with curvature bounded below.
[ 0, 0, 1, 0, 0, 0 ]
Title: Do metric fluctuations affect the Higgs dynamics during inflation?, Abstract: We show that the dynamics of the Higgs field during inflation is not affected by metric fluctuations if the Higgs is an energetically subdominant light spectator. For Standard Model parameters we find that couplings between Higgs and metric fluctuations are suppressed by $\mathcal{O}(10^{-7})$. They are negligible compared to both pure Higgs terms in the effective potential and the unavoidable non-minimal Higgs coupling to background scalar curvature. The question of the electroweak vacuum instability during high energy scale inflation can therefore be studied consistently using the Jordan frame action in a Friedmann--Lemaître--Robertson--Walker metric, where the Higgs-curvature coupling enters as an effective mass contribution. Similar results apply for other light spectator scalar fields during inflation.
[ 0, 1, 0, 0, 0, 0 ]
Title: Quantum Origami: Transversal Gates for Quantum Computation and Measurement of Topological Order, Abstract: In topology, a torus remains invariant under certain non-trivial transformations known as modular transformations. In the context of topologically ordered quantum states of matter, these transformations encode the braiding statistics and fusion rules of emergent anyonic excitations and thus serve as a diagnostic of topological order. Moreover, modular transformations of higher genus surfaces, e.g. a torus with multiple handles, can enhance the computational power of a topological state, in many cases providing a universal fault-tolerant set of gates for quantum computation. However, due to the intrusive nature of modular transformations, which abstractly involve global operations and manifold surgery, physical implementations of them in local systems have remained elusive. Here, we show that by folding manifolds, modular transformations can be applied in a single shot by independent local unitaries, providing a novel class of transversal logic gates for fault-tolerant quantum computation. Specifically, we demonstrate that multi-layer topological states with appropriate boundary conditions and twist defects allow modular transformations to be effectively implemented by a finite sequence of local SWAP gates between the layers. We further provide methods to directly measure the modular matrices, and thus the fractional statistics of anyonic excitations, providing a novel way to directly measure topological order.
[ 0, 1, 0, 0, 0, 0 ]
Title: A Decentralized Framework for Real-Time Energy Trading in Distribution Networks with Load and Generation Uncertainty, Abstract: The proliferation of small-scale renewable generators and price-responsive loads makes it a challenge for distribution network operators (DNOs) to schedule the controllable loads of the load aggregators and the generation of the generators in real time. Additionally, the high computational burden and the violation of the entities' (i.e., load aggregators' and generators') privacy make a centralized framework impractical. In this paper, we propose a decentralized energy trading algorithm that can be executed by the entities in a real-time fashion. To address the privacy issues, the DNO provides the entities with proper control signals, obtained using the Lagrange relaxation technique, to motivate them towards an operating point with maximum profit for the entities. To deal with uncertainty issues, we propose a probabilistic load model and a robust framework for renewable generation. The performance of the proposed algorithm is evaluated on an IEEE 123-node test feeder. When compared with a benchmark of not performing load management for the aggregators, the proposed algorithm benefits both the load aggregators and the generators by increasing their profit by 17.8% and 10.3%, respectively. When compared with a centralized approach, our algorithm converges to the solution of the DNO's centralized problem with a significantly lower running time, within 50 iterations per time slot.
[ 1, 0, 0, 0, 0, 0 ]
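The price-signal coordination described above is, at its core, Lagrangian relaxation / dual decomposition: entities only see a price, never each other's cost or utility parameters. The toy below illustrates that mechanism with one generator and one aggregator and invented quadratic cost and utility curves; it is a sketch of the idea, not the paper's algorithm or its IEEE 123-node evaluation.

```python
# Dual decomposition toy: the "DNO" broadcasts a price (Lagrange multiplier)
# for the supply-demand balance constraint; each entity solves a private
# local problem and reports only its scheduled quantity.
a = 0.5          # generator's cost coefficient: cost(g) = a * g**2   (private)
v, b = 10.0, 0.4 # aggregator's utility: u(d) = v*d - b*d**2          (private)

lam = 0.0        # energy price signal
step = 0.1
for it in range(200):
    g = lam / (2 * a)        # generator: argmax_g  lam*g - a*g**2
    d = (v - lam) / (2 * b)  # aggregator: argmax_d  v*d - b*d**2 - lam*d
    lam += step * (d - g)    # DNO: subgradient step on the balance g = d

print(f"price = {lam:.3f}, generation = {g:.3f}, demand = {d:.3f}")
# At convergence g == d, so the decentralized iterates recover the
# market-clearing point of the centralized welfare-maximization problem.
```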
Title: White Matter Network Architecture Guides Direct Electrical Stimulation Through Optimal State Transitions, Abstract: Electrical brain stimulation is currently being investigated as a therapy for neurological disease. However, opportunities to optimize such therapies are challenged by the fact that the beneficial impact of focal stimulation on both neighboring and distant regions is not well understood. Here, we use network control theory to build a model of brain network function that makes predictions about how stimulation spreads through the brain's white matter network and influences large-scale dynamics. We test these predictions using combined electrocorticography (ECoG) and diffusion weighted imaging (DWI) data from individuals who volunteered to participate in an extensive stimulation regimen. We posit a specific model-based manner in which white matter tracts constrain stimulation, defining its capacity to drive the brain to new states, including states associated with successful memory encoding. In a first validation of our model, we find that the true pattern of white matter tracts can be used to more accurately predict the state transitions induced by direct electrical stimulation than the artificial patterns of null models. We then use a targeted optimal control framework to solve for the optimal energy required to drive the brain to a given state. We show that, intuitively, our model predicts larger energy requirements when starting from states that are farther away from a target memory state. We then suggest testable hypotheses about which structural properties will lead to efficient stimulation for improving memory based on energy requirements. Our work demonstrates that individual white matter architecture plays a vital role in guiding the dynamics of direct electrical stimulation, more generally offering empirical support for the utility of network control theoretic models of brain response to stimulation.
[ 0, 0, 0, 0, 1, 0 ]
Title: Community detection in networks via nonlinear modularity eigenvectors, Abstract: Revealing a community structure in a network or dataset is a central problem arising in many scientific areas. The modularity function $Q$ is an established measure quantifying the quality of a community, which is identified as a set of nodes having high modularity. In our terminology, a set of nodes with positive modularity is called a \textit{module} and a set that maximizes $Q$ is thus called a \textit{leading module}. Finding a leading module in a network is an important task; however, the dimension of real-world problems makes the maximization of $Q$ unfeasible. This poses the need for approximation techniques, which are typically based on a linear relaxation of $Q$, induced by the spectrum of the modularity matrix $M$. In this work we propose a nonlinear relaxation which is instead based on the spectrum of a nonlinear modularity operator $\mathcal M$. We show that extremal eigenvalues of $\mathcal M$ provide an exact relaxation of the modularity measure $Q$, however at the price of being more challenging to compute than those of $M$. Thus we extend the work done on nonlinear Laplacians by proposing a computational scheme, named \textit{generalized RatioDCA}, to address such extremal eigenvalues. We show monotonic ascent and convergence of the method. We finally apply the new method to several synthetic and real-world data sets, showing both the effectiveness of the model and the performance of the method.
[ 1, 0, 0, 1, 0, 0 ]
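For context, the linear relaxation that the abstract above contrasts with is easy to write down: the leading eigenvector of the modularity matrix, thresholded by sign, gives a two-way community split. The toy graph below is an assumption for illustration; the nonlinear operator $\mathcal M$ and the generalized RatioDCA scheme of the paper are not reproduced here.

```python
import numpy as np

# Adjacency matrix of a small graph: two triangles joined by one edge.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
k = A.sum(axis=1)                     # degrees
two_m = k.sum()                       # 2m = total degree
M = A - np.outer(k, k) / two_m        # modularity matrix

w, V = np.linalg.eigh(M)
v = V[:, np.argmax(w)]                # leading (linear) modularity eigenvector
s = np.where(v >= 0, 1, -1)           # sign split = two communities
Q = s @ M @ s / (2 * two_m)           # modularity of the split, Q = s^T M s / 4m
print("communities:", s, " modularity Q =", round(float(Q), 3))
```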
Title: Scale-invariant unconstrained online learning, Abstract: We consider a variant of online convex optimization in which both the instances (input vectors) and the comparator (weight vector) are unconstrained. We exploit a natural scale invariance symmetry in our unconstrained setting: the predictions of the optimal comparator are invariant under any linear transformation of the instances. Our goal is to design online algorithms which also enjoy this property, i.e. are scale-invariant. We start with the case of coordinate-wise invariance, in which the individual coordinates (features) can be arbitrarily rescaled. We give an algorithm, which achieves essentially optimal regret bound in this setup, expressed by means of a coordinate-wise scale-invariant norm of the comparator. We then study general invariance with respect to arbitrary linear transformations. We first give a negative result, showing that no algorithm can achieve a meaningful bound in terms of scale-invariant norm of the comparator in the worst case. Next, we complement this result with a positive one, providing an algorithm which "almost" achieves the desired bound, incurring only a logarithmic overhead in terms of the norm of the instances.
[ 1, 0, 0, 1, 0, 0 ]
Title: Tuning Pairing Amplitude and Spin-Triplet Texture by Curving Superconducting Nanostructures, Abstract: We investigate the nature of the superconducting state in curved nanostructures with Rashba spin-orbit coupling (RSOC). In bent nanostructures with inhomogeneous curvature we find a local enhancement or suppression of the superconducting order parameter, an effect that can be tailored by tuning either the RSOC strength or the carrier density. Apart from the local control of the superconducting spin-singlet amplitude, the geometric curvature generates non-trivial textures of the spin-triplet pairs through a spatial variation of the d-vector. By employing the representative case of an elliptically deformed quantum ring, we demonstrate that the amplitude of the d-vector strongly depends on the strength of the local curvature and it generally exhibits a three-dimensional profile whose winding is tied to that of the single electron spin in the normal state. Our findings unveil novel paths to manipulate the quantum structure of the superconducting state in RSOC nanostructures through their geometry.
[ 0, 1, 0, 0, 0, 0 ]
Title: Thermal conductivity changes across a structural phase transition: the case of high-pressure silica, Abstract: By means of first-principles calculations, we investigate the thermal properties of silica as it evolves, under hydrostatic compression, from a stishovite phase into a CaCl$_2$-type structure. We compute the thermal conductivity tensor by solving the linearized Boltzmann transport equation iteratively in a wide temperature range, using for this the pressure-dependent harmonic and anharmonic interatomic couplings obtained from first principles. Most remarkably, we find that, at low temperatures, SiO$_2$ displays a large peak in the in-plane thermal conductivity and a highly anisotropic behavior close to the structural transformation. We trace back the origin of these features by analyzing the phonon contributions to the conductivity. We discuss the implications of our results in the general context of continuous structural transformations in solids, as well as the potential geological interest of our results for silica.
[ 0, 1, 0, 0, 0, 0 ]
Title: Enemy At the Gateways: A Game Theoretic Approach to Proxy Distribution, Abstract: A core technique used by popular proxy-based circumvention systems like Tor, Psiphon, and Lantern is to secretly share the IP addresses of circumvention proxies with the censored clients so that they can use such systems. For instance, such secretly shared proxies are known as bridges in Tor. However, a key challenge to this mechanism is the insider attack problem: censoring agents can impersonate benign censored clients in order to obtain (and then block) such secretly shared circumvention proxies. In this paper, we perform a fundamental study of the problem of insider attacks on proxy-based circumvention systems. We model the proxy distribution problem using game theory, based on which we derive the optimal strategies of the parties involved, i.e., the censors and the circumvention system operators. That is, we derive the optimal proxy distribution mechanism of a circumvention system like Tor against a censorship adversary who also takes their optimal censorship strategies. This is unlike previous works that design ad hoc mechanisms for proxy distribution against non-optimal censors. We perform extensive simulations to evaluate our optimal proxy assignment algorithm under various adversarial and network settings. Comparing with the state-of-the-art prior work, we show that our optimal proxy assignment algorithm has superior performance, i.e., better resistance to censorship even against the strongest censorship adversary who takes their optimal actions. We conclude with lessons and recommendations for the design of proxy-based circumvention systems.
[ 1, 0, 0, 0, 0, 0 ]
Title: Mod-$p$ isogeny classes on Shimura varieties with parahoric level structure, Abstract: We study the special fiber of the integral models for Shimura varieties of Hodge type with parahoric level structure constructed by Kisin and Pappas in [KP]. We show that when the group is residually split, the points in the mod $p$ isogeny classes have the form predicted by the Langlands Rapoport conjecture in [LR]. We also verify most of the He-Rapoport axioms for these integral models without the residually split assumption. This allows us to prove that all Newton strata are non-empty for these models.
[ 0, 0, 1, 0, 0, 0 ]
Title: Bayesian Optimization with Gradients, Abstract: Bayesian optimization has been successful at global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to decrease the number of objective function evaluations required for good performance. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledge-gradient (dKG), for which we show one-step Bayes-optimality, asymptotic consistency, and greater one-step value of information than is possible in the derivative-free setting. Our procedure accommodates noisy and incomplete derivative information, comes in both sequential and batch forms, and can optionally reduce the computational cost of inference through automatically selected retention of a single directional derivative. We also compute the d-KG acquisition function and its gradient using a novel fast discretization-free technique. We show d-KG provides state-of-the-art performance compared to a wide range of optimization procedures with and without gradients, on benchmarks including logistic regression, deep learning, kernel learning, and k-nearest neighbors.
[ 1, 0, 1, 1, 0, 0 ]
Title: How to cut a cake with a gram matrix, Abstract: In this article we study the problem of fair division. In particular we study a notion introduced by J. Barbanel that generalizes super envy-free fair division. We give a new proof of his result. Our approach allows us to give an explicit bound for this kind of fair division. Furthermore, we also give a theoretical answer to an open problem posed by Barbanel in 1996. Roughly speaking, this question is: how can we decide whether there exists a fair division satisfying some inequality constraints? Moreover, when all the measures are given by piecewise constant density functions, we show how to construct such a fair division effectively.
[ 1, 0, 0, 0, 0, 0 ]
Title: Magnetic MIMO Signal Processing and Optimization for Wireless Power Transfer, Abstract: In magnetic resonant coupling (MRC) enabled multiple-input multiple-output (MIMO) wireless power transfer (WPT) systems, multiple transmitters (TXs) each with a single coil are used to enhance the efficiency of simultaneous power transfer to multiple single-coil receivers (RXs) by constructively combining their induced magnetic fields at the RXs, a technique termed "magnetic beamforming". In this paper, we study the optimal magnetic beamforming design in a multi-user MIMO MRC-WPT system. We introduce the multi-user power region that constitutes all the achievable power tuples for all RXs, subject to the given total power constraint over all TXs as well as their individual peak voltage and current constraints. We characterize each boundary point of the power region by maximizing the sum-power deliverable to all RXs subject to their minimum harvested power constraints. For the special case without the TX peak voltage and current constraints, we derive the optimal TX current allocation for the single-RX setup in closed form as well as that for the multi-RX setup. In general, the problem is a non-convex quadratically constrained quadratic program (QCQP), which is difficult to solve. For the case of a single RX, we show that the semidefinite relaxation (SDR) of the problem is tight. For the general case with multiple RXs, based on SDR we obtain two approximate solutions by applying time-sharing and randomization, respectively. Moreover, for practical implementation of magnetic beamforming, we propose a novel signal processing method to estimate the magnetic MIMO channel due to the mutual inductances between TXs and RXs. Numerical results show that our proposed magnetic channel estimation and adaptive beamforming schemes are practically effective, and can significantly improve the power transfer efficiency and multi-user performance trade-off in MIMO MRC-WPT systems.
[ 1, 0, 0, 0, 0, 0 ]
Title: Bouncy Hybrid Sampler as a Unifying Device, Abstract: This work introduces a class of rejection-free Markov chain Monte Carlo (MCMC) samplers, named the Bouncy Hybrid Sampler, which unifies several existing methods from the literature. Examples include the Bouncy Particle Sampler of Peters and de With (2012), Bouchard-Cote et al. (2015) and the Hamiltonian MCMC. Following the introduced general framework, we derive a new sampler called the Quadratic Bouncy Hybrid Sampler. We apply this novel sampler to the problem of sampling from a truncated Gaussian distribution.
[ 0, 0, 0, 1, 0, 0 ]
Title: Lifshitz interaction can promote ice growth at water-silica interfaces, Abstract: At air-water interfaces, the Lifshitz interaction by itself does not promote ice growth. On the contrary, we find that the Lifshitz force promotes the growth of an ice film, up to 1-8 nm thickness, near silica-water interfaces at the triple point of water. This is achieved in a system where the combined effect of the retardation and the zero frequency mode influences the short-range interactions at low temperatures, contrary to common understanding. Cancellation between the positive and negative contributions in the Lifshitz spectral function is reversed in silica with high porosity. Our results provide a model for how water freezes on glass and other surfaces.
[ 0, 1, 0, 0, 0, 0 ]
Title: Reverse iterative volume sampling for linear regression, Abstract: We study the following basic machine learning task: Given a fixed set of $d$-dimensional input points for a linear regression problem, we wish to predict a hidden response value for each of the points. We can only afford to attain the responses for a small subset of the points that are then used to construct linear predictions for all points in the dataset. The performance of the predictions is evaluated by the total square loss on all responses (the attained as well as the hidden ones). We show that a good approximate solution to this least squares problem can be obtained from just dimension $d$ many responses by using a joint sampling technique called volume sampling. Moreover, the least squares solution obtained for the volume-sampled subproblem is an unbiased estimator of the optimal solution based on all $n$ responses. This unbiasedness is a desirable property that is not shared by other common subset selection techniques. Motivated by these basic properties, we develop a theoretical framework for studying volume sampling, resulting in a number of new matrix expectation equalities and statistical guarantees which are of importance not only to least squares regression but also to numerical linear algebra in general. Our methods also lead to a regularized variant of volume sampling, and we propose the first efficient algorithms for volume sampling which make this technique a practical tool in the machine learning toolbox. Finally, we provide experimental evidence which confirms our theoretical findings.
[ 0, 0, 0, 1, 0, 0 ]
Title: Learning from Complementary Labels, Abstract: Collecting labeled data is costly and thus a critical bottleneck in real-world classification tasks. To mitigate this problem, we propose a novel setting, namely learning from complementary labels for multi-class classification. A complementary label specifies a class that a pattern does not belong to. Collecting complementary labels would be less laborious than collecting ordinary labels, since users do not have to carefully choose the correct class from a long list of candidate classes. However, complementary labels are less informative than ordinary labels and thus a suitable approach is needed to better learn from them. In this paper, we show that an unbiased estimator of the classification risk can be obtained only from complementarily labeled data, if a loss function satisfies a particular symmetric condition. We derive estimation error bounds for the proposed method and prove that the optimal parametric convergence rate is achieved. We further show that learning from complementary labels can be easily combined with learning from ordinary labels (i.e., ordinary supervised learning), providing a highly practical implementation of the proposed method. Finally, we experimentally demonstrate the usefulness of the proposed methods.
[ 1, 0, 0, 1, 0, 0 ]
Title: Dusty winds in active galactic nuclei: reconciling observations with models, Abstract: This letter presents a revised radiative transfer model for the infrared (IR) emission of active galactic nuclei (AGN). While current models assume that the IR is emitted from a dusty torus in the equatorial plane of the AGN, spatially resolved observations indicate that the majority of the IR emission from 100 pc in many AGN originates from the polar region, contradicting classical torus models. The new model CAT3D-WIND builds upon the suggestion that the dusty gas around the AGN consists of an inflowing disk and an outflowing wind. Here, it is demonstrated that (1) such disk+wind models cover overall a similar parameter range of observed spectral features in the IR as classical clumpy torus models, e.g. the silicate feature strengths and mid-IR spectral slopes, (2) they reproduce the 3-5{\mu}m bump observed in many type 1 AGN, unlike torus models, and (3) they are able to explain polar emission features seen in IR interferometry, even for type 1 AGN at relatively low inclination, as demonstrated for NGC3783. These characteristics make it possible to reconcile radiative transfer models with observations and provide further evidence of a two-component parsec-scale dusty medium around AGN: the disk gives rise to the 3-5{\mu}m near-IR component, while the wind produces the mid-IR emission. The model SEDs will be made available for download.
[ 0, 1, 0, 0, 0, 0 ]
Title: Continuity of Utility Maximization under Weak Convergence, Abstract: In this paper we find sufficient conditions for the continuity of the value of the utility maximization problem from terminal wealth with respect to the convergence in distribution of the underlying processes. We provide several examples which illustrate that without these conditions, we cannot generally expect continuity to hold. Finally, we apply our results to the computation of the minimum shortfall in the Heston model by building an appropriate lattice approximation.
[ 0, 0, 0, 0, 0, 1 ]
Title: Introducing the anatomy of disciplinary discernment: an example from astronomy, Abstract: Education is increasingly being framed by a competence mindset; the value of knowledge lies much more in competence performativity and innovation than in simply knowing. Reaching such competency in areas such as astronomy and physics has long been known to be challenging. The movement from everyday conceptions of the world around us to a disciplinary interpretation is fraught with pitfalls and problems. Thus, what underpins the characteristics of the disciplinary trajectory to competence becomes an important educational consideration. In this article we report on a study involving what students and lecturers discern from the same disciplinary semiotic resource. We use this to propose an Anatomy of Disciplinary Discernment (ADD), a hierarchy of what is focused on and how it is interpreted in an appropriate, disciplinary manner, as an overarching fundamental aspect of disciplinary learning. Students and lecturers in astronomy and physics were asked to describe what they could discern from a video simulation of travel through our Galaxy and beyond. In total, 137 people from nine countries participated. The descriptions were analysed using a hermeneutic interpretive study approach. The analysis resulted in the formulation of five qualitatively different categories of discernment: the ADD, reflecting a view of participants' competence levels. The ADD reveals four increasing levels of disciplinary discernment: Identification, Explanation, Appreciation, and Evaluation. This facilitates the identification of a clear relationship between educational level and the level of disciplinary discernment. The analytical outcomes of the study suggest how teachers of science, after using the ADD to assess the students' disciplinary knowledge, may attain new insights into how to create more effective learning environments by explicitly crafting their teaching to support the crossing of boundaries in the ADD model.
[ 0, 1, 0, 0, 0, 0 ]
Title: Well-posedness and dispersive decay of small data solutions for the Benjamin-Ono equation, Abstract: This article represents a first step toward understanding the long time dynamics of solutions for the Benjamin-Ono equation. While this problem is known to be both completely integrable and globally well-posed in $L^2$, much less seems to be known concerning its long time dynamics. Here, we prove that for small localized data the solutions have (nearly) dispersive dynamics almost globally in time. An additional objective is to revisit the $L^2$ theory for the Benjamin-Ono equation and provide a simpler, self-contained approach.
[ 0, 0, 1, 0, 0, 0 ]
Title: A vehicle-to-infrastructure communication based algorithm for urban traffic control, Abstract: We present in this paper a new algorithm for urban traffic light control with mixed traffic (communicating and non-communicating vehicles) and mixed infrastructure (equipped and unequipped junctions). By an equipped junction we mean here a junction with a traffic light signal (TLS) controlled by a road side unit (RSU). At such a junction, the RSU manifests its connectedness to equipped vehicles by broadcasting its communication address and geographical coordinates. The RSU builds a map of connected vehicles approaching and leaving the junction. The algorithm allows the RSU to select a traffic phase based on the built map. The selected traffic phase is applied by the TLS, and both equipped and unequipped vehicles must respect it. The traffic management thus operates in feedback on the traffic demand of the communicating vehicles. We simulated the vehicular traffic as well as the communications. The two simulations are combined in a closed loop with visualization and monitoring interfaces. Several indicators on vehicular traffic (mean travel time, ended vehicles) and on IEEE 802.11p communication performance (end-to-end delay, throughput) are derived and illustrated in three-dimensional maps. We then extended the traffic control to an urban road network where we also varied the number of equipped junctions. Other indicators are shown for road traffic performance in the road network case, where high gains are observed in the simulation results.
[ 1, 0, 1, 0, 0, 0 ]
Title: Some Theorems on Optimality of a Single Observation Confidence Interval for the Mean of a Normal Distribution, Abstract: We consider the problem of finding a proper confidence interval for the mean based on a single observation from a normal distribution with both mean and variance unknown. Portnoy (2017) characterizes the scale-sign invariant rules and shows that the Hunt-Stein construction provides a randomized invariant rule that improves on any given randomized rule in the sense that it has greater minimal coverage among all procedures with a fixed expected length. Mathematical results here provide a specific mixture of two non-randomized invariant rules that achieve the minimax optimality. A multivariate confidence set based on a single observation vector is also developed.
[ 0, 0, 1, 1, 0, 0 ]
Title: Reliability and applicability of magnetic force linear response theory: Numerical parameters, predictability, and orbital resolution, Abstract: We investigated the reliability and applicability of the so-called magnetic force linear response method to calculate spin-spin interaction strengths from first principles. We examined the dependence on the numerical parameters, including the number of basis orbitals and their cutoff radii, within the non-orthogonal LCPAO (linear combination of pseudo-atomic orbitals) formalism. It is shown that the parameter dependence and the ambiguity caused by these choices are small enough in comparison to other computational approaches and to experiments. Further, we pursued possible extensions of this technique to a wider range of applications. We showed that the magnetic force theorem can provide reasonable estimates, especially in the case of strongly localized moments, even when the ground-state configuration is unknown or the total energy value is not accessible. The formalism is extended to carry orbital resolution, from which the matrix form of the magnetic coupling constant is calculated. From applications to Fe-based superconductors including LaFeAsO, NaFeAs, BaFe$_2$As$_2$ and FeTe, the distinctive characteristics of the orbital-resolved interactions are clearly noticeable between single-stripe pnictides and double-stripe chalcogenides.
[ 0, 1, 0, 0, 0, 0 ]
Title: Verifying Probabilistic Timed Automata Against Omega-Regular Dense-Time Properties, Abstract: Probabilistic timed automata (PTAs) are timed automata (TAs) extended with discrete probability distributions. They serve as a mathematical model for a wide range of applications that involve both stochastic and timed behaviours. In this work, we consider the problem of model-checking linear \emph{dense-time} properties over PTAs. In particular, we study linear dense-time properties that can be encoded by TAs with an infinite acceptance criterion. First, we show that the problem of model-checking PTAs against deterministic-TA specifications can be solved through a product construction. Based on the product construction, we prove that the computational complexity of the problem with deterministic-TA specifications is EXPTIME-complete. Then we show that when relaxed to general (nondeterministic) TAs, the model-checking problem becomes undecidable. Our results substantially extend the state of the art with both the dense-time feature and the nondeterminism in TAs.
[ 1, 0, 0, 0, 0, 0 ]
Title: On the number of integer polynomials with multiplicatively dependent roots, Abstract: In this paper, we give some counting results on integer polynomials of fixed degree and bounded height whose distinct non-zero roots are multiplicatively dependent. These include sharp lower bounds, upper bounds and asymptotic formulas for various cases, although in general there is a logarithmic gap between lower and upper bounds.
[ 0, 0, 1, 0, 0, 0 ]
Title: New irreducible tensor product modules for the Virasoro algebra, Abstract: In this paper, we obtain a class of Virasoro modules by taking tensor products of the irreducible Virasoro modules $\Omega(\lambda,\alpha,h)$ defined in \cite{CG}, with irreducible highest weight modules $V(\theta,h)$ or with irreducible Virasoro modules Ind$_{\theta}(N)$ defined in \cite{MZ2}. We obtain the necessary and sufficient conditions for such tensor product modules to be irreducible, and determine the necessary and sufficient conditions for two of them to be isomorphic. These modules are not isomorphic to any other known irreducible Virasoro modules.
[ 0, 0, 1, 0, 0, 0 ]
Title: Polarization dynamics in a photon BEC, Abstract: It has previously been shown that a dye-filled microcavity can produce a Bose-Einstein condensate of photons. Thermalization of photons is possible via repeated absorption and re-emission by the dye molecules. In this paper, we theoretically explore the behavior of the polarization of light in this system. We find that in contrast to the near complete thermalization between different spatial modes of light, thermalization of polarization states is expected to generally be incomplete. We show that the polarization degree changes significantly from below to above threshold, and explain the dependence of polarization on all relevant material parameters.
[ 0, 1, 0, 0, 0, 0 ]
Title: Combining symmetry breaking and restoration with configuration interaction: extension to z-signature symmetry in the case of the Lipkin Model, Abstract: Background: Ab initio many-body methods whose numerical cost scales polynomially with the number of particles have been developed over the past fifteen years to tackle closed-shell mid-mass nuclei. Open-shell nuclei have been further addressed by implementing variants based on the concept of spontaneous symmetry breaking (and restoration). Purpose: In order to access the spectroscopy of open-shell nuclei more systematically while controlling the numerical cost, we design a novel many-body method that combines the merit of breaking and restoring symmetries with those brought about by low-rank individual excitations. Methods: The recently proposed truncated configuration-interaction method based on optimized symmetry-broken and -restored states is extended to the z-signature symmetry associated with a discrete subgroup of SU(2). The highly-truncated N-body Hilbert subspace within which the Hamiltonian is diagonalized is spanned by a z-signature broken and restored Slater determinant vacuum and associated low-rank excitations. Results: The proposed method provides an excellent reproduction of the ground-state energy and of low-lying excitation energies of various z-signatures and total angular momenta. In doing so, the successive benefits of (i) breaking the symmetry, (ii) restoring the symmetry, (iii) including low-rank particle-hole excitations and (iv) optimizing the amount by which the underlying vacuum breaks the symmetry are illustrated. Conclusions: The numerical cost of the newly designed variational method is polynomial with respect to the system size. The present study confirms the results obtained previously for the attractive pairing Hamiltonian in connection with the breaking and restoration of U(1) global gauge symmetry. These two studies constitute a strong motivation to apply this method to realistic nuclear Hamiltonians.
[ 0, 1, 0, 0, 0, 0 ]
Title: Nonlocal Cauchy problems for wave equations and applications, Abstract: In this paper, the existence, uniqueness and estimates of solutions to the integral Cauchy problem for linear and nonlinear abstract wave equations are proved. The equation involves a linear operator A defined in a Banach space E; by choosing E and A, we can obtain numerous classes of nonlocal initial value problems for wave equations which occur in a wide variety of physical systems.
[ 0, 0, 1, 0, 0, 0 ]
Title: A two-phase gradient method for quadratic programming problems with a single linear constraint and bounds on the variables, Abstract: We propose a gradient-based method for quadratic programming problems with a single linear constraint and bounds on the variables. Inspired by the GPCG algorithm for bound-constrained convex quadratic programming [J.J. Moré and G. Toraldo, SIAM J. Optim. 1, 1991], our approach alternates between two phases until convergence: an identification phase, which performs gradient projection iterations until either a candidate active set is identified or no reasonable progress is made, and an unconstrained minimization phase, which reduces the objective function in a suitable space defined by the identification phase, by applying either the conjugate gradient method or a recently proposed spectral gradient method. However, the algorithm differs from GPCG not only because it deals with a more general class of problems, but mainly for the way it stops the minimization phase. This is based on a comparison between a measure of optimality in the reduced space and a measure of bindingness of the variables that are on the bounds, defined by extending the concept of proportioning, which was proposed by some authors for box-constrained problems. If the objective function is bounded, the algorithm converges to a stationary point thanks to a suitable application of the gradient projection method in the identification phase. For strictly convex problems, the algorithm converges to the optimal solution in a finite number of steps even in case of degeneracy. Extensive numerical experiments show the effectiveness of the proposed approach.
[ 0, 0, 1, 0, 0, 0 ]
Title: Euler characteristic and Akashi series for Selmer groups over global function fields, Abstract: Let $A$ be an abelian variety defined over a global function field $F$ of positive characteristic $p$ and let $K/F$ be a $p$-adic Lie extension with Galois group $G$. We provide a formula for the Euler characteristic $\chi(G,Sel_A(K)_p)$ of the $p$-part of the Selmer group of $A$ over $K$. In the special case $G=\mathbb{Z}_p^d$ and $A$ a constant ordinary variety, using Akashi series, we show how the Euler characteristic of the dual of $Sel_A(K)_p$ is related to special values of a $p$-adic $\mathcal{L}$-function.
[ 0, 0, 1, 0, 0, 0 ]
Title: The effect of boundary conditions on mixing of 2D Potts models at discontinuous phase transitions, Abstract: We study Swendsen--Wang dynamics for the critical $q$-state Potts model on the square lattice. For $q=2,3,4$, where the phase transition is continuous, the mixing time $t_{\textrm{mix}}$ is expected to obey a universal power-law independent of the boundary conditions. On the other hand, for large $q$, where the phase transition is discontinuous, the authors recently showed that $t_{\textrm{mix}}$ is highly sensitive to boundary conditions: $t_{\textrm{mix}} \geq \exp(cn)$ on an $n\times n$ box with periodic boundary, yet under free or monochromatic boundary conditions, $t_{\textrm{mix}} \leq\exp(n^{o(1)})$. In this work we classify this effect under boundary conditions that interpolate between these two (torus vs. free/monochromatic). Specifically, if one of the $q$ colors is red, mixed boundary conditions such as red-free-red-free on the 4 sides of the box induce $t_{\textrm{mix}} \geq \exp(cn)$, yet Dobrushin boundary conditions such as red-red-free-free, as well as red-periodic-red-periodic, induce sub-exponential mixing.
[ 0, 0, 1, 0, 0, 0 ]
Title: Number of thermodynamic states in the three-dimensional Edwards-Anderson spin glass, Abstract: The question of the number of thermodynamic states present in the low-temperature phase of the three-dimensional Edwards-Anderson Ising spin glass is addressed by studying spin and link overlap distributions using population annealing Monte Carlo simulations. We consider overlaps between systems with the same boundary condition (the usual quantities measured) and also overlaps between systems with different boundary conditions, both for the full systems and within a smaller window inside the system. Our results appear to be fully compatible with a single pair of pure states such as in the droplet/scaling picture. However, our results on whether domain walls induced by changing boundary conditions are space filling are also compatible with scenarios having many thermodynamic states, such as the chaotic pairs picture and the replica symmetry breaking picture. The differing results for spin overlaps in same and different boundary conditions suggest that finite-size effects are very large for the system sizes currently accessible in low-temperature simulations.
[ 0, 1, 0, 0, 0, 0 ]
Title: Mechanics of disordered auxetic metamaterials, Abstract: Auxetic materials are of great engineering interest not only because of their fascinating negative Poisson's ratio, but also due to their increased toughness and indentation resistance. These materials are typically synthesized polyester foams with a very heterogeneous structure, but the role of disorder in auxetic behavior is not fully understood. Here, we provide a systematic theoretical and experimental investigation into the effect of disorder on the mechanical properties of a paradigmatic auxetic lattice with a re-entrant hexagonal geometry. We show that disorder has a marginal effect on the Poisson's ratio unless the lattice topology is altered, and in all cases examined the disorder preserves the auxetic characteristics. Depending on the direction of loading applied to these disordered auxetic lattices, either brittle or ductile failure is observed. It is found that brittle failure is associated with a disorder-dependent tensile strength, whereas in ductile failure disorder does not affect strength. Our work thus provides general guidelines to optimize the elasticity and strength of disordered auxetic metamaterials.
[ 0, 1, 0, 0, 0, 0 ]
Title: Coherent Oscillations of Driven rf SQUID Metamaterials, Abstract: Through experiments and numerical simulations we explore the behavior of rf SQUID (radio frequency superconducting quantum interference device) metamaterials, which show extreme tunability and nonlinearity. The emergent electromagnetic properties of this metamaterial are sensitive to the degree of coherent response of the driven interacting SQUIDs. Coherence suffers in the presence of disorder, which is experimentally found to be mainly due to a dc flux gradient. We demonstrate methods to recover the coherence, specifically by varying the coupling between the SQUID meta-atoms and increasing the temperature or the amplitude of the applied rf flux.
[ 0, 1, 0, 0, 0, 0 ]
Title: Localizing virtual structure sheaves by cosections, Abstract: We construct a cosection localized virtual structure sheaf when a Deligne-Mumford stack is equipped with a perfect obstruction theory and a cosection of the obstruction sheaf.
[ 0, 0, 1, 0, 0, 0 ]
Title: Regularized Greedy Column Subset Selection, Abstract: The Column Subset Selection Problem provides a natural framework for unsupervised feature selection. Despite being a hard combinatorial optimization problem, there exist efficient algorithms that provide good approximations. The drawback of the problem formulation is that it incorporates no form of regularization, and is therefore very sensitive to noise when presented with scarce data. In this paper we propose a regularized formulation of this problem, and derive a correct greedy algorithm that is similar in efficiency to existing greedy methods for the unregularized problem. We study its adequacy for feature selection and propose suitable formulations. Additionally, we derive a lower bound for the error of the proposed problems. Through various numerical experiments on real and synthetic data, we demonstrate the significantly increased robustness and stability of our method, as well as the improved conditioning of its output, all while remaining efficient for practical use.
[ 0, 0, 0, 1, 0, 0 ]
Title: The Cartan Algorithm in Five Dimensions, Abstract: In this paper we introduce an algorithm to determine the equivalence of five dimensional spacetimes, which generalizes the Karlhede algorithm for four dimensional general relativity. As an alternative to the Petrov type classification, we employ the alignment classification to algebraically classify the Weyl tensor. To illustrate the algorithm we discuss three examples: the singly rotating Myers-Perry solution, the Kerr (anti) de Sitter solution, and the rotating black ring solution. We briefly discuss some applications of the Cartan algorithm in five dimensions.
[ 0, 0, 1, 0, 0, 0 ]
Title: Uniform convergence for the incompressible limit of a tumor growth model, Abstract: We study a model introduced by Perthame and Vauchelet that describes the growth of a tumor governed by Brinkman's Law, which takes into account friction between the tumor cells. We adopt the viscosity solution approach to establish an optimal uniform convergence result for the tumor density as well as the pressure in the incompressible limit. The system lacks a standard maximum principle, and thus a modification of the usual approach is necessary.
[ 0, 0, 1, 0, 0, 0 ]
Title: Clustering in Hilbert space of a quantum optimization problem, Abstract: The solution space of many classical optimization problems breaks up into clusters which are extensively distant from one another in the Hamming metric. Here, we show that an analogous quantum clustering phenomenon takes place in the ground state subspace of a certain quantum optimization problem. This involves extending the notion of clustering to Hilbert space, where the classical Hamming distance is not immediately useful. Quantum clusters correspond to macroscopically distinct subspaces of the full quantum ground state space which grow with the system size. We explicitly demonstrate that such clusters arise in the solution space of random quantum satisfiability (3-QSAT) at its satisfiability transition. We estimate both the number of these clusters and their internal entropy. The former are given by the number of hardcore dimer coverings of the core of the interaction graph, while the latter is related to the underconstrained degrees of freedom not touched by the dimers. We additionally provide new numerical evidence suggesting that the 3-QSAT satisfiability transition may coincide with the product satisfiability transition, which would imply the absence of an intermediate entangled satisfiable phase.
[ 1, 1, 0, 0, 0, 0 ]
Title: Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship, Abstract: Echo chambers, i.e., situations where one is exposed only to opinions that agree with their own, are an increasing concern for the political discourse in many democratic countries. This paper studies the phenomenon of political echo chambers on social media. We identify the two components in the phenomenon: the opinion that is shared ('echo'), and the place that allows its exposure ('chamber', i.e., the social network), and examine closely how these two components interact. We define a production and consumption measure for social-media users, which captures the political leaning of the content shared and received by them. By comparing the two, we find that Twitter users are, to a large degree, exposed to political opinions that agree with their own. We also find that users who try to bridge the echo chambers, by sharing content with diverse leaning, have to pay a 'price of bipartisanship' in terms of their network centrality and content appreciation. In addition, we study the role of 'gatekeepers', users who consume content with diverse leaning but produce partisan content (with a single-sided leaning), in the formation of echo chambers. Finally, we apply these findings to the task of predicting partisans and gatekeepers from social and content features. While partisan users turn out to be relatively easy to identify, gatekeepers prove to be more challenging.
[ 1, 0, 0, 0, 0, 0 ]
Title: Resonance enhancement of two photon absorption by magnetically trapped atoms in strong rf-fields, Abstract: Applying a many-mode Floquet formalism for magnetically trapped atoms interacting with a polychromatic rf-field, we predict a large two photon transition probability in the atomic system of cold $^{87}Rb$ atoms. The physical origin of this enormous increase in the two photon transition probability is the formation of avoided crossings between eigen-energy levels originating from different Floquet sub-manifolds and the redistribution of population in the resonant intermediate levels, which give rise to the resonance enhancement effect. Other notable features of the studied atom-field composite system include the splitting of the generated avoided crossings in the strong field strength limit and a periodic variation of the single and two photon transition probabilities with the mode separation frequency of the polychromatic rf-field. This work may find applications in characterizing the properties of cold atom clouds in magnetic traps using rf-spectroscopy techniques.
[ 0, 1, 0, 0, 0, 0 ]
Title: Unveiling the AGN in IC 883: discovery of a parsec-scale radio jet, Abstract: IC883 is a luminous infrared galaxy (LIRG) classified as a starburst-active galactic nucleus (AGN) composite. In a previous study we detected a low-luminosity AGN (LLAGN) radio candidate. Here we report on our radio follow-up at three frequencies, which provides direct and unequivocal evidence of the AGN activity in IC883. Our analysis of archival X-ray data, together with the detection of a transient radio source with luminosity typical of bright supernovae, gives further evidence of the ongoing star formation activity, which dominates the energetics of the system. At sub-parsec scales, the radio nucleus has a core-jet morphology with the jet being a newly ejected component showing a subluminal proper motion of 0.6c-1c. The AGN contributes less than two per cent of the total IR luminosity of the system. The corresponding Eddington factor is ~1E-3, suggesting this is a low-accretion rate engine, as often found in LLAGNs. However, its high bolometric luminosity (~1E44 erg/s) agrees better with a normal AGN. This apparent discrepancy may just be an indication of the transition nature of the nucleus from a system dominated by star formation to an AGN-dominated system. The nucleus has a strongly inverted spectrum and a turnover at ~4.4 GHz, thus qualifying as a candidate for the least luminous (L_5.0GHz ~ 6.3E28 erg/s/Hz) and one of the youngest (~3000 yr) gigahertz-peaked spectrum (GPS) sources. If the GPS origin for the IC883 nucleus is confirmed, then advanced mergers in the LIRG category are potentially key environments to unveil the evolution of GPS sources into more powerful radio galaxies.
[ 0, 1, 0, 0, 0, 0 ]
Title: Counting triangles, tunable clustering and the small-world property in random key graphs (Extended version), Abstract: Random key graphs were introduced to study various properties of the Eschenauer-Gligor key predistribution scheme for wireless sensor networks (WSNs). Recently this class of random graphs has received much attention in contexts as diverse as recommender systems, social network modeling, and clustering and classification analysis. This paper is devoted to analyzing various properties of random key graphs. In particular, we establish a zero-one law for the existence of triangles in random key graphs, and identify the corresponding critical scaling. This zero-one law exhibits significant differences with the corresponding result in Erdos-Renyi (ER) graphs. We also compute the clustering coefficient of random key graphs, and compare it to that of ER graphs in the many-node regime when their expected average degrees are asymptotically equivalent. For the parameter range of practical relevance in both wireless sensor network and social network applications, random key graphs are shown to be much more clustered than the corresponding ER graphs. We also explore the suitability of random key graphs as small-world models in the sense of Watts and Strogatz.
[ 1, 0, 1, 0, 0, 0 ]
Title: Convergence Rates of Variational Posterior Distributions, Abstract: We study convergence rates of variational posterior distributions for nonparametric and high-dimensional inference. We formulate general conditions on prior, likelihood, and variational class that characterize the convergence rates. Under "prior mass and testing" conditions similar to those considered in the literature, the rate is found to be the sum of two terms. The first term stands for the convergence rate of the true posterior distribution, and the second term is contributed by the variational approximation error. For a class of priors that admit the structure of a mixture of product measures, we propose a novel prior mass condition, under which the variational approximation error of the generalized mean-field class is dominated by the convergence rate of the true posterior. We demonstrate the applicability of our general results for various models, prior distributions and variational classes by deriving convergence rates of the corresponding variational posteriors.
[ 0, 0, 1, 1, 0, 0 ]
Title: V773 Cas, QS Aql, and BR Ind: Eclipsing Binaries as Parts of Multiple Systems, Abstract: Eclipsing binaries remain crucial objects for our understanding of the universe. In particular, those that are components of multiple systems can help us solve the problem of the formation of these systems. Analysis of the radial velocities together with the light curve produced for the first time precise physical parameters of the components of the multiple systems V773 Cas, QS Aql, and BR Ind. Their visual orbits were also analyzed, which resulted in slightly improved orbital elements. What is typical for all these systems is that their most dominant source is the third distant component. The system V773 Cas consists of two similar G1-2V stars revolving in a circular orbit and a more distant component of the A3V type. Additionally, the improved value of parallax was calculated to be 17.6 mas. Analysis of QS Aql resulted in the following: the inner eclipsing pair is composed of B6V and F1V stars, and the third component is of about the B6 spectral type. The outer orbit has high eccentricity of about 0.95, and observations near its upcoming periastron passage between the years 2038 and 2040 are of high importance. Also, the parallax of the system was derived to be about 2.89 mas, moving the star much closer to the Sun than originally assumed. The system BR Ind was found to be a quadruple star consisting of two eclipsing K dwarfs orbiting each other with a period of 1.786 days; the distant component is a single-lined spectroscopic binary with an orbital period of about 6 days. Both pairs are moving around each other on their 148 year orbit.
[ 0, 1, 0, 0, 0, 0 ]
Title: Variance bounding of delayed-acceptance kernels, Abstract: A delayed-acceptance version of a Metropolis-Hastings algorithm can be useful for Bayesian inference when it is computationally expensive to calculate the true posterior, but a computationally cheap approximation is available; the delayed-acceptance kernel targets the same posterior as its parent Metropolis-Hastings kernel. Although the asymptotic variance of any functional of the chain cannot be less than that obtained using its parent, the average computational time per iteration can be much smaller and so for a given computational budget the delayed-acceptance kernel can be more efficient. When the asymptotic variance of all $L^2$ functionals of the chain is finite, the kernel is said to be variance bounding. It has recently been noted that a delayed-acceptance kernel need not be variance bounding even when its parent is. We provide sufficient conditions for inheritance: for global algorithms, such as the independence sampler, the error in the approximation should be bounded; for local algorithms, two alternative sets of conditions are provided. As a by-product of our initial, general result we also supply sufficient conditions on any pair of proposals such that, for any shared target distribution, if a Metropolis-Hastings kernel using one of the proposals is variance bounding then so is the Metropolis-Hastings kernel using the other proposal.
[ 0, 0, 1, 1, 0, 0 ]
Title: Positive Herz-Schur multipliers and approximation properties of crossed products, Abstract: For a $C^*$-algebra $A$ and a set $X$ we give a Stinespring-type characterisation of the completely positive Schur $A$-multipliers on $K(\ell^2(X))\otimes A$. We then relate them to completely positive Herz-Schur multipliers on $C^*$-algebraic crossed products of the form $A\rtimes_{\alpha,r} G$, with $G$ a discrete group, whose various versions were considered earlier by Anantharaman-Delaroche, Bédos and Conti, and Dong and Ruan. The latter maps are shown to implement approximation properties, such as nuclearity or the Haagerup property, for $A\rtimes_{\alpha,r} G$.
[ 0, 0, 1, 0, 0, 0 ]
Title: Bias-Variance Tradeoff of Graph Laplacian Regularizer, Abstract: This paper presents a bias-variance tradeoff of graph Laplacian regularizer, which is widely used in graph signal processing and semi-supervised learning tasks. The scaling law of the optimal regularization parameter is specified in terms of the spectral graph properties and a novel signal-to-noise ratio parameter, which suggests selecting a mediocre regularization parameter is often suboptimal. The analysis is applied to three applications, including random, band-limited, and multiple-sampled graph signals. Experiments on synthetic and real-world graphs demonstrate near-optimal performance of the established analysis.
[ 1, 0, 0, 1, 0, 0 ]
Title: Dual Discriminator Generative Adversarial Nets, Abstract: We propose in this paper a novel approach to tackle the problem of mode collapse encountered in the generative adversarial network (GAN). Our idea is intuitive yet proves to be very effective, especially in addressing some key limitations of GAN. In essence, it combines the Kullback-Leibler (KL) and reverse KL divergences into a unified objective function, thus exploiting the complementary statistical properties from these divergences to effectively diversify the estimated density in capturing multi-modes. We term our method dual discriminator generative adversarial nets (D2GAN), which, unlike GAN, has two discriminators; together with a generator, it plays out an analogue of a minimax game, wherein one discriminator rewards high scores for samples from the data distribution whilst the other discriminator, conversely, favors data from the generator, and the generator produces data to fool both discriminators. We develop theoretical analysis to show that, given the maximal discriminators, optimizing the generator of D2GAN reduces to minimizing both KL and reverse KL divergences between the data distribution and the distribution induced from the data generated by the generator, hence effectively avoiding the mode collapse problem. We conduct extensive experiments on synthetic and real-world large-scale datasets (MNIST, CIFAR-10, STL-10, ImageNet), where we have made our best effort to compare our D2GAN with the latest state-of-the-art GAN variants in comprehensive qualitative and quantitative evaluations. The experimental results demonstrate the competitive and superior performance of our approach in generating good-quality and diverse samples over baselines, and the capability of our method to scale up to the ImageNet database.
[ 1, 0, 0, 1, 0, 0 ]
Title: A Compressive Sensing Approach to Community Detection with Applications, Abstract: The community detection problem for graphs asks one to partition the n vertices V of a graph G into k communities, or clusters, such that there are many intracluster edges and few intercluster edges. Of course this is equivalent to finding a permutation matrix P such that, if A denotes the adjacency matrix of G, then PAP^T is approximately block diagonal. As there are k^n possible partitions of n vertices into k subsets, directly determining the optimal clustering is clearly infeasible. Instead one seeks to solve a more tractable approximation to the clustering problem. In this paper we reformulate the community detection problem via sparse solution of a linear system associated with the Laplacian of a graph G and then develop a two-stage approach based on a thresholding technique and a compressive sensing algorithm to find a sparse solution which corresponds to the community containing a vertex of interest in G. Crucially, our approach results in an algorithm which is able to find a single cluster of size n_0 in O(n log(n) n_0) operations and all k clusters in fewer than O(n^2 ln(n)) operations. This is a marked improvement over the classic spectral clustering algorithm, which is unable to find a single cluster at a time and takes approximately O(n^3) operations to find all k clusters. Moreover, we are able to provide robust guarantees of success for the case where G is drawn at random from the Stochastic Block Model, a popular model for graphs with clusters. Extensive numerical results are also provided, showing the efficacy of our algorithm on both synthetic and real-world data sets.
[ 1, 0, 0, 1, 0, 0 ]
Title: Uncertainty quantification for radio interferometric imaging: I. proximal MCMC methods, Abstract: Uncertainty quantification is a critical missing component in radio interferometric imaging that will only become increasingly important as the big-data era of radio interferometry emerges. Since radio interferometric imaging requires solving a high-dimensional, ill-posed inverse problem, uncertainty quantification is difficult but also critical to the accurate scientific interpretation of radio observations. Statistical sampling approaches to perform Bayesian inference, like Markov Chain Monte Carlo (MCMC) sampling, can in principle recover the full posterior distribution of the image, from which uncertainties can then be quantified. However, traditional high-dimensional sampling methods are generally limited to smooth (e.g. Gaussian) priors and cannot be used with sparsity-promoting priors. Sparse priors, motivated by the theory of compressive sensing, have been shown to be highly effective for radio interferometric imaging. In this article proximal MCMC methods are developed for radio interferometric imaging, leveraging proximal calculus to support non-differentiable priors, such as sparse priors, in a Bayesian framework. Furthermore, three strategies to quantify uncertainties using the recovered posterior distribution are developed: (i) local (pixel-wise) credible intervals to provide error bars for each individual pixel; (ii) highest posterior density credible regions; and (iii) hypothesis testing of image structure. These forms of uncertainty quantification provide rich information for analysing radio interferometric observations in a statistically robust manner.
[ 0, 1, 0, 1, 0, 0 ]
Title: Fairer and more accurate, but for whom?, Abstract: Complex statistical machine learning models are increasingly being used or considered for use in high-stakes decision-making pipelines in domains such as financial services, health care, criminal justice and human services. These models are often investigated as possible improvements over more classical tools such as regression models or human judgement. While the modeling approach may be new, the practice of using some form of risk assessment to inform decisions is not. When determining whether a new model should be adopted, it is therefore essential to be able to compare the proposed model to the existing approach across a range of task-relevant accuracy and fairness metrics. Looking at overall performance metrics, however, may be misleading. Even when two models have comparable overall performance, they may nevertheless disagree in their classifications on a considerable fraction of cases. In this paper we introduce a model comparison framework for automatically identifying subgroups in which the differences between models are most pronounced. Our primary focus is on identifying subgroups where the models differ in terms of fairness-related quantities such as racial or gender disparities. We present experimental results from a recidivism prediction task and a hypothetical lending example.
[ 1, 0, 0, 1, 0, 0 ]
Title: Multiresolution Tensor Decomposition for Multiple Spatial Passing Networks, Abstract: This article is motivated by soccer positional passing networks collected across multiple games. We refer to these data as replicated spatial passing networks; to accurately model such data, it is necessary to take into account the spatial positions of the passer and receiver for each passing event. This spatial registration, together with the replication across games, represents a key difference from usual social network data. As a key step before investigating how the passing dynamics influence team performance, we focus on developing methods for summarizing different teams' passing strategies. Our proposed approach relies on a novel multiresolution data representation framework and Poisson nonnegative block term decomposition model, which automatically produces coarse-to-fine low-rank network motifs. The proposed methods are applied to detailed passing record data collected from the 2014 FIFA World Cup.
[ 1, 0, 0, 1, 0, 0 ]
Title: Fast-slow asymptotic for semi-analytical ignition criteria in FitzHugh-Nagumo system, Abstract: We study the problem of initiation of excitation waves in the FitzHugh-Nagumo model. Our approach follows earlier works and is based on the idea of approximating the boundary between basins of attraction of propagating waves and of the resting state as the stable manifold of a critical solution. Here, we obtain analytical expressions for the essential ingredients of the theory by singular perturbation using two small parameters, the separation of time scales of the activator and inhibitor, and the threshold in the activator's kinetics. This results in a closed analytical expression for the strength-duration curve.
[ 0, 1, 0, 0, 0, 0 ]
Title: What drives galactic magnetism?, Abstract: We aim to use statistical analysis of a large number of various galaxies to probe, model, and understand relations between different galaxy properties and magnetic fields. We have compiled a sample of 55 galaxies including low-mass dwarf and Magellanic-types, normal spirals and several massive starbursts, and applied principal component analysis (PCA) and regression methods to assess the impact of various galaxy properties on the observed magnetic fields. According to PCA the global galaxy parameters (like HI, H2, and dynamical mass, star formation rate (SFR), near-infrared luminosity, size, and rotational velocity) are all mutually correlated and can be reduced to a single principal component. Further PCA performed for global and intensive (not size related) properties of galaxies (such as gas density, and surface density of the star formation rate, SSFR), indicates that magnetic field strength B is connected mainly to the intensive parameters, while the global parameters have only weak relationships with B. We find that the tightest relationship of B is with SSFR, which is described by a power-law with an index of 0.33+-0.03. The observed weaker associations of B with galaxy dynamical mass and the rotational velocity we interpret as indirect ones, resulting from the observed connection of the global SFR with the available total H2 mass in galaxies. Using our sample we constructed a diagram of B across the Hubble sequence which reveals that high values of B are not restricted by the Hubble type. However, weaker fields appear exclusively in later Hubble types and B as low as about 5muG is not seen among typical spirals. The processes of generation of magnetic field in the dwarf and Magellanic-type galaxies are similar to those in the massive spirals and starbursts and are mainly coupled to local star-formation activity involving the small-scale dynamo mechanism.
[ 0, 1, 0, 0, 0, 0 ]
Title: Regular Separability of One Counter Automata, Abstract: The regular separability problem asks, for two given languages, if there exists a regular language including one of them but disjoint from the other. Our main result is decidability, and PSpace-completeness, of the regular separability problem for languages of one counter automata without zero tests (also known as one counter nets). This contrasts with undecidability of the regularity problem for one counter nets, and with undecidability of the regular separability problem for one counter automata, which is our second result.
[ 1, 0, 0, 0, 0, 0 ]
Title: Fast Incremental SVDD Learning Algorithm with the Gaussian Kernel, Abstract: Support vector data description (SVDD) is a machine learning technique that is used for single-class classification and outlier detection. The idea of SVDD is to find a set of support vectors that defines a boundary around data. When dealing with online or large data, existing batch SVDD methods have to be rerun in each iteration. We propose a fast incremental SVDD learning algorithm (FISVDD) that uses the Gaussian kernel. This algorithm builds on the observation that all support vectors on the boundary have the same distance to the center of the sphere in a higher-dimensional feature space as mapped by the Gaussian kernel function. Each iteration involves only the existing support vectors and the new data point. Moreover, the algorithm is based solely on matrix manipulations; the support vectors and their corresponding Lagrange multipliers $\alpha_i$ are automatically selected and determined in each iteration. It can be seen that the complexity of our algorithm in each iteration is only $O(k^2)$, where $k$ is the number of support vectors. Experimental results on some real data sets indicate that FISVDD demonstrates significant gains in efficiency with almost no loss in either outlier detection accuracy or objective function value.
[ 0, 0, 0, 1, 0, 0 ]
Title: Comparing distributions by multiple testing across quantiles or CDF values, Abstract: When comparing two distributions, it is often helpful to learn at which quantiles or values there is a statistically significant difference. This provides more information than the binary "reject" or "do not reject" decision of a global goodness-of-fit test. Framing our question as multiple testing across the continuum of quantiles $\tau\in(0,1)$ or values $r\in\mathbb{R}$, we show that the Kolmogorov--Smirnov test (interpreted as a multiple testing procedure) achieves strong control of the familywise error rate. However, its well-known flaw of low sensitivity in the tails remains. We provide an alternative method that retains such strong control of familywise error rate while also having even sensitivity, i.e., equal pointwise type I error rates at each of $n\to\infty$ order statistics across the distribution. Our one-sample method computes instantly, using our new formula that also instantly computes goodness-of-fit $p$-values and uniform confidence bands. To improve power, we also propose stepdown and pre-test procedures that maintain control of the asymptotic familywise error rate. One-sample and two-sample cases are considered, as well as extensions to regression discontinuity designs and conditional distributions. Simulations, empirical examples, and code are provided.
[ 0, 0, 1, 1, 0, 0 ]
Title: Magnetic droplet nucleation with homochiral Neel domain wall, Abstract: We investigate the effect of the Dzyaloshinskii-Moriya interaction (DMI) on magnetic domain nucleation in a ferromagnetic thin film with perpendicular magnetic anisotropy. We propose an extended droplet model to determine the nucleation field as a function of the in-plane field. The model can explain the experimentally observed nucleation in a CoNi microstrip with the interfacial DMI. The results are also reproduced by micromagnetic simulation based on the string model. The electrical measurement method proposed in this study can be widely used to quantitatively determine the DMI energy density.
[ 0, 1, 0, 0, 0, 0 ]
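For readers who want a feel for the basic droplet picture invoked above, the snippet below evaluates the textbook energy of a circular reversed domain in a thin film, E(r) = 2*pi*r*t*sigma_w - 2*mu0*Ms*Hz*pi*r^2*t, and the resulting nucleation barrier. The in-plane-field and DMI dependence of the wall energy sigma_w, which is the point of the paper's extended model, is left as a user-supplied placeholder, and all parameter values are illustrative rather than those of the CoNi microstrip in the abstract.

```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability [T*m/A]

def droplet_barrier(sigma_w, Ms, Hz, t):
    """Critical radius and energy barrier of the 2D droplet model.

    E(r) = 2*pi*r*t*sigma_w - 2*mu0*Ms*Hz*pi*r**2*t peaks at
    r* = sigma_w / (2*mu0*Ms*Hz), giving E* = pi*t*sigma_w**2 / (2*mu0*Ms*Hz).
    """
    r_star = sigma_w / (2.0 * MU0 * Ms * Hz)
    e_star = np.pi * t * sigma_w ** 2 / (2.0 * MU0 * Ms * Hz)
    return r_star, e_star

def wall_energy(Hx):
    """Placeholder for the in-plane-field-dependent Neel-wall energy sigma_w(Hx).

    In the paper's extended model this is where the DMI enters; here it is a
    made-up smooth function used only to exercise the barrier formula.
    """
    sigma0, slope = 10e-3, 1e-5       # J/m^2 and a fictitious slope
    return np.maximum(sigma0 - slope * np.abs(Hx), 1e-4)

# Barrier versus in-plane field at fixed out-of-plane field (values illustrative).
Ms, t, Hz = 6e5, 1e-9, 2e4            # A/m, m, A/m
for Hx in (0.0, 200.0, 400.0):
    r_star, e_star = droplet_barrier(wall_energy(Hx), Ms, Hz, t)
    print(f"Hx={Hx:6.0f} A/m  r*={r_star*1e9:6.1f} nm  E*={e_star:.3e} J")
```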
Title: Cost-complexity pruning of random forests, Abstract: Random forests perform bootstrap aggregation by sampling the training samples with replacement. This enables the evaluation of the out-of-bag error, which serves as an internal cross-validation mechanism. Our motivation lies in using the unsampled training samples to improve each decision tree in the ensemble. We study the effect of using the out-of-bag samples, via post-pruning, to improve the generalization error first of the individual decision trees and then of the random forest as a whole. A preliminary empirical study on four UCI repository datasets shows a consistent decrease in the size of the forests without considerable loss in accuracy.
[ 1, 0, 0, 1, 0, 0 ]
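The general idea above can be prototyped with standard tools: bootstrap each tree manually, then use that tree's out-of-bag samples to choose a cost-complexity pruning level. This is only a rough sketch built on scikit-learn's `ccp_alpha` mechanism, not the authors' exact procedure; the dataset and forest size are arbitrary.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n, n_trees = len(y), 25
forest = []

for b in range(n_trees):
    in_bag = rng.integers(0, n, size=n)              # bootstrap with replacement
    oob = np.setdiff1d(np.arange(n), in_bag)         # unsampled (out-of-bag) rows
    # Candidate prunings of the unpruned tree, from weakest to strongest.
    path = DecisionTreeClassifier(random_state=b).cost_complexity_pruning_path(
        X[in_bag], y[in_bag])
    alphas = np.unique(np.clip(path.ccp_alphas, 0.0, None))  # guard tiny negatives
    best_alpha, best_acc = 0.0, -1.0
    for alpha in alphas:
        tree = DecisionTreeClassifier(random_state=b, ccp_alpha=alpha)
        tree.fit(X[in_bag], y[in_bag])
        acc = tree.score(X[oob], y[oob])             # OOB samples act as validation
        if acc > best_acc:
            best_alpha, best_acc = alpha, acc
    forest.append(DecisionTreeClassifier(random_state=b, ccp_alpha=best_alpha)
                  .fit(X[in_bag], y[in_bag]))

# Majority vote of the pruned trees.
votes = np.stack([t.predict(X) for t in forest])
pred = (votes.mean(axis=0) > 0.5).astype(int)
print("training accuracy of pruned forest:", (pred == y).mean())
print("mean leaves per pruned tree:", np.mean([t.get_n_leaves() for t in forest]))
```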
Title: Chemical abundances of fast-rotating massive stars. I. Description of the methods and individual results, Abstract: Aims: Recent observations have challenged our understanding of rotational mixing in massive stars by revealing a population of fast-rotating objects with apparently normal surface nitrogen abundances. However, several issues have rendered a reinvestigation necessary: the presence of numerous upper limits on the nitrogen abundance, unknown multiplicity status, and a mix of stars with different physical properties, such as mass and evolutionary state, which are known to control the amount of rotational mixing. Methods: We have carefully selected a large sample of bright, fast-rotating early-type stars of our Galaxy (40 objects with spectral types between B0.5 and O4). Their high-quality, high-resolution optical spectra were then analysed with the stellar atmosphere modelling codes DETAIL/SURFACE or CMFGEN, depending on the temperature of the target. Several internal and external checks were performed to validate our methods; notably, we compared our results with literature data for some well-known objects, studied the effect of gravity darkening, and compared the results provided by the two codes for stars amenable to both analyses. Furthermore, we studied the radial velocities of the stars to assess their binarity. Results: This first part of our study presents our methods and provides the derived stellar parameters, the He and CNO abundances, and the multiplicity status of every star in the sample. This is the first time that the He and CNO abundances of such a large number of Galactic massive fast rotators have been determined in a homogeneous way.
[ 0, 1, 0, 0, 0, 0 ]
Title: Morphology and Motility of Cells on Soft Substrates, Abstract: Recent experiments suggest that the interplay between cells and the mechanics of their substrate gives rise to a diversity of morphological and migratory behaviors. Here, we develop a Cellular Potts Model of polarizing cells on a visco-elastic substrate. We compare our model with experiments on endothelial cells plated on polyacrylamide hydrogels to constrain model parameters and test predictions. Our analysis reveals that morphology and migratory behavior are determined by an intricate interplay between cellular polarization and the substrate strain gradients generated by the traction forces exerted by the cells (self-haptotaxis).
[ 0, 0, 0, 0, 1, 0 ]
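To ground the modelling framework mentioned above, here is a bare-bones Cellular Potts Model kernel: a square lattice of cell labels, an adhesion plus area-constraint Hamiltonian, and Metropolis-style copy attempts. It omits everything that makes the paper's model interesting (polarization, substrate viscoelasticity, traction and self-haptotaxis), and all parameters are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
L, T = 50, 4.0                       # lattice size and fluctuation "temperature"
J_CELL_MED, J_CELL_CELL = 8.0, 6.0   # contact energies (placeholders)
LAM, A_TARGET = 1.0, 60.0            # area-constraint strength and target area

grid = np.zeros((L, L), dtype=int)   # 0 = medium, 1..N = cells
grid[20:26, 20:26] = 1               # one 6x6 cell to start
areas = {0: L * L - 36, 1: 36}
NBRS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def contact_energy(a, b):
    if a == b:
        return 0.0
    return J_CELL_MED if (a == 0 or b == 0) else J_CELL_CELL

def local_adhesion(i, j, label):
    """Adhesion energy of site (i, j) against its neighbours if it carried `label`."""
    return sum(contact_energy(label, grid[(i + di) % L, (j + dj) % L])
               for di, dj in NBRS)

def area_energy(label, delta):
    if label == 0:                   # the medium has no area constraint
        return 0.0
    a = areas[label]
    return LAM * ((a + delta - A_TARGET) ** 2 - (a - A_TARGET) ** 2)

def attempt_copy():
    i, j = rng.integers(0, L, size=2)
    di, dj = NBRS[rng.integers(0, 4)]
    src = grid[(i + di) % L, (j + dj) % L]
    tgt = grid[i, j]
    if src == tgt:
        return
    dH = (local_adhesion(i, j, src) - local_adhesion(i, j, tgt)
          + area_energy(src, +1) + area_energy(tgt, -1))
    if dH <= 0 or rng.random() < np.exp(-dH / T):
        grid[i, j] = src
        areas[src] += 1
        areas[tgt] -= 1

for _ in range(200_000):             # Monte Carlo copy attempts
    attempt_copy()
print("cell area after relaxation:", areas[1])
```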
Title: A domain-specific language and matrix-free stencil code for investigating electronic properties of Dirac and topological materials, Abstract: We introduce PVSC-DTM (Parallel Vectorized Stencil Code for Dirac and Topological Materials), a library and code generator based on a domain-specific language tailored to implement the specific stencil-like algorithms that describe Dirac and topological materials, such as graphene and topological insulators, in a matrix-free way. The generated hybrid-parallel (MPI+OpenMP) code is fully vectorized using Single Instruction Multiple Data (SIMD) extensions. It is significantly faster than matrix-based approaches at the node level and performs in accordance with the roofline model. We demonstrate the chip-level performance and distributed-memory scalability of basic building blocks such as sparse matrix-(multiple-)vector multiplication on modern multicore CPUs. As an application example, we use the PVSC-DTM scheme to (i) explore the scattering of a Dirac wave on an array of gate-defined quantum dots, (ii) calculate a set of interior eigenvalues for strong topological insulators, and (iii) discuss the photoemission spectra of a disordered Weyl semimetal.
[ 1, 1, 0, 0, 0, 0 ]
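The core trick behind the library described above, applying a Hamiltonian as a stencil instead of a stored sparse matrix, can be illustrated in a few lines. The sketch below applies a 1D nearest-neighbour tight-binding operator to a block of vectors without ever forming the matrix; it is a toy stand-in, not PVSC-DTM's generated code, and the parameters are arbitrary.

```python
import numpy as np

def apply_tb_stencil(eps, t, X):
    """Matrix-free Y = H X for a 1D tight-binding chain, applied columnwise.

    H[i, i] = eps[i], H[i, i-1] = H[i, i+1] = t (open boundaries); the action is
    expressed as a shift-and-add stencil, so H is never stored.
    """
    Y = eps[:, None] * X
    Y[1:, :]  += t * X[:-1, :]       # hopping from the left neighbour
    Y[:-1, :] += t * X[1:, :]        # hopping from the right neighbour
    return Y

n, n_vec = 100_000, 4
rng = np.random.default_rng(0)
eps = rng.normal(scale=0.5, size=n)           # e.g. Anderson-type on-site disorder
X = rng.standard_normal((n, n_vec))
Y = apply_tb_stencil(eps, -1.0, X)

# Cross-check on a small instance against an explicitly built matrix.
m = 6
H = np.diag(eps[:m]) - (np.eye(m, k=1) + np.eye(m, k=-1))
assert np.allclose(apply_tb_stencil(eps[:m], -1.0, X[:m]), H @ X[:m])
```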
Title: A cross-correlation-based estimate of the galaxy luminosity function, Abstract: We extend existing methods for using cross-correlations to derive redshift distributions for photometric galaxies, without using photometric redshifts. The model presented in this paper simultaneously yields highly accurate and unbiased redshift distributions and, for the first time, redshift-dependent luminosity functions, using only clustering information and the apparent magnitudes of the galaxies as input. In contrast to many existing techniques for recovering unbiased redshift distributions, the output of our method is not degenerate with the galaxy bias b(z); this is achieved by modelling the shape of the luminosity bias. We successfully apply our method to a mock galaxy survey and discuss improvements to be made before applying the model to real data.
[ 0, 1, 0, 0, 0, 0 ]
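As a toy illustration of the cross-correlation (clustering-redshift) idea underlying the abstract above, and not of the paper's joint luminosity-function model, the snippet below recovers a normalized redshift distribution from per-bin cross-correlation amplitudes, assuming the bias factors and the dark-matter correlation amplitude are known. All inputs are synthetic placeholders.

```python
import numpy as np

def clustering_dndz(w_cross, b_phot, b_spec, w_dm, dz):
    """Toy clustering-redshift estimator.

    Assumes w_cross(z) ~ dN/dz * b_phot(z) * b_spec(z) * w_dm(z) in each thin bin,
    so dividing out the bias factors and the dark-matter amplitude and normalizing
    to unit integral gives an estimate of the redshift distribution dN/dz.
    """
    dndz = w_cross / (b_phot * b_spec * w_dm)
    return dndz / np.sum(dndz * dz)

# Synthetic example: a Gaussian true dN/dz observed through known bias evolution.
z, dz = np.arange(0.05, 1.55, 0.1), 0.1
true_dndz = np.exp(-0.5 * ((z - 0.6) / 0.15) ** 2)
true_dndz /= np.sum(true_dndz * dz)
b_phot, b_spec = 1.0 + 0.5 * z, 1.2 + 0.3 * z       # assumed bias evolution
w_dm = 0.05 / (1.0 + z) ** 2                        # fictitious dark-matter amplitude
w_cross = true_dndz * b_phot * b_spec * w_dm        # "measured" amplitudes, noise-free

recovered = clustering_dndz(w_cross, b_phot, b_spec, w_dm, dz)
print(np.max(np.abs(recovered - true_dndz)))        # ~0 in this noise-free toy
```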