text (string, 138-2.38k characters) | labels (sequence, length 6) | Predictions (sequence, length 1-3) |
---|---|---|
Title: $\textsf{S}^3T$: An Efficient Score-Statistic for Spatio-Temporal Surveillance,
Abstract: We present an efficient score statistic, called the $\textsf{S}^3 \textsf{T}$
statistic, to detect the emergence of a spatially and temporally correlated
signal from either fixed-sample or sequential data. The signal may cause a mean
shift and/or a change in the covariance structure. The score statistic can
capture both spatial and temporal structures of the change and hence is
particularly powerful in detecting weak signals. The score statistic is
computationally efficient and statistically powerful. Our main theoretical
contributions are accurate analytical approximations of the false alarm rate of
the detection procedures, which can be used to calibrate the threshold
analytically. Numerical experiments on simulated and real data demonstrate the
good performance of our procedure for solar flare detection and water quality
monitoring. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Analysis of dropout learning regarded as ensemble learning,
Abstract: Deep learning is the state-of-the-art in fields such as visual object
recognition and speech recognition. This learning uses a large number of
layers, a huge number of units, and many connections. Therefore, overfitting is a
serious problem. To avoid this problem, dropout learning has been proposed. Dropout
learning neglects some inputs and hidden units during training with
probability p; the neglected inputs and hidden units are then combined
with the learned network to express the final output. We find that the process
of combining the neglected hidden units with the learned network can be
regarded as ensemble learning, so we analyze dropout learning from this point
of view. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Spatially-resolved Brillouin spectroscopy reveals biomechanical changes in early ectatic corneal disease and post-crosslinking in vivo,
Abstract: Mounting evidence connects the biomechanical properties of tissues to the
development of eye diseases such as keratoconus, a common disease in which the
cornea thins and bulges into a conical shape. However, measuring biomechanical
changes in vivo with sufficient sensitivity for disease detection has proved
challenging. Here, we present a first large-scale study (~200 subjects,
including normal and keratoconus patients) using Brillouin light-scattering
microscopy to measure longitudinal modulus in corneal tissues with high
sensitivity and spatial resolution. Our results in vivo provide evidence of
biomechanical inhomogeneity at the onset of keratoconus and suggest that
biomechanical asymmetry between the left and right eyes may presage disease
development. We additionally measure the stiffening effect of corneal
crosslinking treatment in vivo for the first time. Our results demonstrate the
promise of Brillouin microscopy for diagnosis and treatment of keratoconus, and
potentially other diseases. | [
0,
0,
0,
0,
1,
0
] | [
"Quantitative Biology",
"Physics"
] |
Title: Conjoined constraints on modified gravity from the expansion history and cosmic growth,
Abstract: In this paper we present conjoined constraints on several cosmological models
from the expansion history $H(z)$ and cosmic growth $f\sigma_8(z)$. The models
we study include the CPL $w_0w_a$ parametrization, the Holographic Dark Energy
(HDE) model, the Time varying vacuum ($\Lambda_t$CDM) model, the Dvali,
Gabadadze and Porrati (DGP) and Finsler-Randers (FRDE) models, a power law
$f(T)$ model and finally the Hu-Sawicki $f(R)$ model. In all cases we perform a
simultaneous fit to the SnIa, CMB, BAO, $H(z)$ and growth data, while also
following the conjoined visualization of $H(z)$ and $f\sigma_8(z)$ as in Linder
(2017). Furthermore, we introduce the Figure of Merit (FoM) in the
$H(z)-f\sigma_8(z)$ parameter space as a way to constrain models that jointly
fit both probes well. We use both the latest $H(z)$ and $f\sigma_8(z)$ data,
but also LSST-like mocks with $1\%$ measurements and we find that the conjoined
method of constraining the expansion history and cosmic growth simultaneously
is able not only to place stringent constraints on these parameters but also to
provide an easy visual way to discriminate cosmological models. Finally, we
confirm the existence of a tension between the growth rate and Planck CMB data
and we find that the FoM in the conjoined parameter space of
$H(z)-f\sigma_8(z)$ can be used to discriminate between the $\Lambda$CDM model
and certain classes of modified gravity models, namely the DGP and $f(T)$. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Translations: generalizing relative expressiveness between logics,
Abstract: There is a strong demand for precise means for the comparison of logics in
terms of expressiveness both from theoretical and from application areas. The
aim of this paper is to propose a sufficiently general and reasonable formal
criterion for expressiveness, so as to apply not only to model-theoretic
logics, but also to Tarskian and proof-theoretic logics. For model-theoretic
logics there is a standard framework of relative expressiveness, based on the
capacity of characterizing structures, and a straightforward formal criterion
issuing from it. The problem is that it only allows the comparison of those
logics defined within the same class of models. The urge for a broader
framework of expressiveness is not new. Nevertheless, the enterprise is complex
and a reasonable model-theoretic formal criterion is still wanting. Recently
there appeared two criteria in this wider framework, one from García-Matos &
Väänänen and the other from L. Kuijer. We argue that they are not adequate.
Their limitations are analyzed and we propose to move to an even broader
framework lacking model-theoretic notions, which we call "translational
expressiveness". There is already a criterion in this later framework by
Mossakowski et al., however it turned out to be too lax. We propose some
adequacy criteria for expressiveness and a formal criterion of translational
expressiveness complying with them is given. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Absence of long range order in the frustrated magnet SrDy$_2$O$_4$ due to trapped defects from a dimensionality crossover,
Abstract: Magnetic frustration and low dimensionality can prevent long range magnetic
order and lead to exotic correlated ground states. SrDy$_2$O$_4$ consists of
magnetic Dy$^{3+}$ ions forming magnetically frustrated zig-zag chains along
the c-axis and shows no long range order to temperatures as low as $T=60$ mK.
We carried out neutron scattering and AC magnetic susceptibility measurements
using powder and single crystals of SrDy$_2$O$_4$. Diffuse neutron scattering
indicates strong one-dimensional (1D) magnetic correlations along the chain
direction that can be qualitatively accounted for by the axial next-nearest
neighbour Ising (ANNNI) model with nearest-neighbor and next-nearest-neighbor
exchange $J_1=0.3$ meV and $J_2=0.2$ meV, respectively. Three-dimensional (3D)
correlations become important below $T^*\approx0.7$ K. At $T=60$ mK, the short
range correlations are characterized by a putative propagation vector
$\textbf{k}_{1/2}=(0,\frac{1}{2},\frac{1}{2})$. We argue that the absence of
long range order arises from the presence of slowly decaying 1D domain walls
that are trapped due to 3D correlations. This stabilizes a low-temperature
phase without long range magnetic order, but with well-ordered chain segments
separated by slowly-moving domain walls. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: A class of multi-resolution approximations for large spatial datasets,
Abstract: Gaussian processes are popular and flexible models for spatial, temporal, and
functional data, but they are computationally infeasible for large datasets. We
discuss Gaussian-process approximations that use basis functions at multiple
resolutions to achieve fast inference and that can (approximately) represent
any spatial covariance structure. We consider two special cases of this
multi-resolution-approximation framework, a taper version and a
domain-partitioning (block) version. We describe theoretical properties and
inference procedures, and study the computational complexity of the methods.
Numerical comparisons and an application to satellite data are also provided. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Emergence of Selective Invariance in Hierarchical Feed Forward Networks,
Abstract: Many theories have emerged which investigate how invariance is generated in
hierarchical networks through simple schemes such as max and mean pooling.
The restriction to max/mean pooling in theoretical and empirical studies has
diverted attention away from a more general way of generating invariance to
nuisance transformations. We conjecture that hierarchically building
selective invariance (i.e. carefully choosing the range of the transformation
to be invariant to at each layer of a hierarchical network) is important
for pattern recognition. We utilize a novel pooling layer called adaptive
pooling to find linear pooling weights within networks. These networks with the
learnt pooling weights have performances on object categorization tasks that
are comparable to max/mean pooling networks. Interestingly, adaptive pooling
can converge to mean pooling (when initialized with random pooling weights),
find more general linear pooling schemes or even decide not to pool at all. We
illustrate the general notion of selective invariance through object
categorization experiments on large-scale datasets such as SVHN and ILSVRC
2012. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: A Bayesian Estimation for the Fractional Order of the Differential Equation that Models Transport in Unconventional Hydrocarbon Reservoirs,
Abstract: The extraction of natural gas from the earth has been shown to be governed by
differential equations concerning flow through a porous material. Recently,
models such as fractional differential equations have been developed to model
this phenomenon. One key issue with these models is estimating the fractional
order of the differential equation. Traditional methods such as maximum likelihood,
least squares and even method of moments are not available to estimate this
parameter as traditional calculus methods do not apply. We develop a Bayesian
approach to estimate the fractional order of the differential equation
that models transport in unconventional hydrocarbon reservoirs. In this paper,
we use this approach to adequately quantify the uncertainties associated with
the error and predictions. A simulation study is presented as well to assess
the utility of the modeling approach. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Numerical simulations of magnetic billiards in a convex domain in $\mathbb{R}^2$,
Abstract: We present numerical simulations of magnetic billiards inside a convex domain
in the plane. | [
0,
1,
1,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Supercharacters and the discrete Fourier, cosine, and sine transforms,
Abstract: Using supercharacter theory, we identify the matrices that are diagonalized
by the discrete cosine and discrete sine transforms, respectively. Our method
affords a combinatorial interpretation for the matrix entries. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Time Complexity of Constraint Satisfaction via Universal Algebra,
Abstract: The exponential-time hypothesis (ETH) states that 3-SAT is not solvable in
subexponential time, i.e. not solvable in O(c^n) time for arbitrary c > 1,
where n denotes the number of variables. Problems like k-SAT can be viewed as
special cases of the constraint satisfaction problem (CSP), which is the
problem of determining whether a set of constraints is satisfiable. In this
paper we study the worst-case time complexity of NP-complete CSPs. Our main
interest is in the CSP problem parameterized by a constraint language Gamma
(CSP(Gamma)), and how the choice of Gamma affects the time complexity. It is
believed that CSP(Gamma) is either tractable or NP-complete, and the algebraic
CSP dichotomy conjecture gives a sharp delineation of these two classes based
on algebraic properties of constraint languages. Under this conjecture and the
ETH, we first rule out the existence of subexponential algorithms for
finite-domain NP-complete CSP(Gamma) problems. This result also extends to
certain infinite-domain CSPs and structurally restricted CSP(Gamma) problems.
We then begin a study of the complexity of NP-complete CSPs where one is
allowed to arbitrarily restrict the values of individual variables, which is a
very well-studied subclass of CSPs. For such CSPs with finite domain D, we
identify a relation SD such that (1) CSP({SD}) is NP-complete and (2) if
CSP(Gamma) over D is NP-complete and solvable in O(c^n) time, then CSP({SD}) is
solvable in O(c^n) time, too. Hence, the time complexity of CSP({SD}) is a
lower bound for all CSPs of this particular kind. We also prove that the
complexity of CSP({SD}) is decreasing when |D| increases, unless the ETH is
false. This implies, for instance, that for every c>1 there exists a
finite-domain Gamma such that CSP(Gamma) is NP-complete and solvable in O(c^n)
time. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Magnetic behavior of new compounds, Gd3RuSn6 and Tb3RuSn6,
Abstract: We report the temperature (T) dependence of dc magnetization, electrical
resistivity (rho(T)), and heat-capacity of rare-earth (R) compounds, Gd3RuSn6
and Tb3RuSn6, which are found to crystallize in the Yb3CoSn6-type orthorhombic
structure (space group: Cmcm). The results establish that there is an onset of
antiferromagnetic order near T_N = 19 and 25 K, respectively. In addition, we
find that there is another magnetic transition in both cases, around 14 and
17 K respectively. In the case of the Gd compound, the spin-scattering
contribution to rho is found to increase below 75 K as the material is cooled
towards T_N, thereby resulting in a minimum in the plot of rho(T) unexpected
for Gd based systems. Isothermal magnetization at 1.8 K reveals an upward
curvature around 50 kOe. Isothermal magnetoresistance plots show interesting
anomalies in the magnetically ordered state. There are sign reversals in the
plot of isothermal entropy change versus T in the magnetically ordered state,
indicating subtle changes in the spin reorientation with T. The results reveal
that these compounds exhibit interesting magnetic properties. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Ergodic Exploration of Distributed Information,
Abstract: This paper presents an active search trajectory synthesis technique for
autonomous mobile robots with nonlinear measurements and dynamics. The
presented approach uses the ergodicity of a planned trajectory with respect to
an expected information density map to close the loop during search. The
ergodic control algorithm does not rely on discretization of the search or
action spaces, and is well posed for coverage with respect to the expected
information density whether the information is diffuse or localized, thus
trading off between exploration and exploitation in a single objective
function. As a demonstration, we use a robotic electrolocation platform to
estimate location and size parameters describing static targets in an
underwater environment. Our results demonstrate that the ergodic exploration of
distributed information (EEDI) algorithm outperforms commonly used
information-oriented controllers, particularly when distractions are present. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Towards Visual Explanations for Convolutional Neural Networks via Input Resampling,
Abstract: The predictive power of neural networks often costs model interpretability.
Several techniques have been developed for explaining model outputs in terms of
input features; however, it is difficult to translate such interpretations into
actionable insight. Here, we propose a framework to analyze predictions in
terms of the model's internal features by inspecting information flow through
the network. Given a trained network and a test image, we select neurons by two
metrics, both measured over a set of images created by perturbations to the
input image: (1) magnitude of the correlation between the neuron activation and
the network output and (2) precision of the neuron activation. We show that the
former metric selects neurons that exert large influence over the network
output while the latter metric selects neurons that activate on generalizable
features. By comparing the sets of neurons selected by these two metrics, our
framework suggests a way to investigate the internal attention mechanisms of
convolutional neural networks. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Two classes of number fields with a non-principal Euclidean ideal,
Abstract: This paper introduces two classes of totally real quartic number fields, one
of biquadratic extensions and one of cyclic extensions, each of which has a
non-principal Euclidean ideal. It generalizes techniques of Graves used to
prove that the number field $\mathbb{Q}(\sqrt{2},\sqrt{35})$ has a
non-principal Euclidean ideal. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Finding Crash-Consistency Bugs with Bounded Black-Box Crash Testing,
Abstract: We present a new approach to testing file-system crash consistency: bounded
black-box crash testing (B3). B3 tests the file system in a black-box manner
using workloads of file-system operations. Since the space of possible
workloads is infinite, B3 bounds this space based on parameters such as the
number of file-system operations or which operations to include, and
exhaustively generates workloads within this bounded space. Each workload is
tested on the target file system by simulating power-loss crashes while the
workload is being executed, and checking if the file system recovers to a
correct state after each crash. B3 builds upon insights derived from our study
of crash-consistency bugs reported in Linux file systems in the last five
years. We observed that most reported bugs can be reproduced using small
workloads of three or fewer file-system operations on a newly-created file
system, and that all reported bugs result from crashes after fsync() related
system calls. We build two tools, CrashMonkey and ACE, to demonstrate the
effectiveness of this approach. Our tools are able to find 24 out of the 26
crash-consistency bugs reported in the last five years. Our tools also revealed
10 new crash-consistency bugs in widely-used, mature Linux file systems, seven
of which have existed in the kernel since 2014. Our tools also found a
crash-consistency bug in a verified file system, FSCQ. The new bugs result in
severe consequences like broken rename atomicity and loss of persisted files. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Nanopteron solutions of diatomic Fermi-Pasta-Ulam-Tsingou lattices with small mass-ratio,
Abstract: Consider an infinite chain of masses, each connected to its nearest neighbors
by a (nonlinear) spring. This is a Fermi-Pasta-Ulam-Tsingou lattice. We prove
the existence of traveling waves in the setting where the masses alternate in
size. In particular we address the limit where the mass ratio tends to zero.
The problem is inherently singular and we find that the traveling waves are not
true solitary waves but rather "nanopterons", which is to say, waves which
are asymptotic at spatial infinity to very small amplitude periodic waves.
Moreover, we can only find solutions when the mass ratio lies in a certain open
set. The difficulties in the problem all revolve around understanding Jost
solutions of a nonlocal Schrödinger operator in its semi-classical limit. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Steering Orbital Optimization out of Local Minima and Saddle Points Toward Lower Energy,
Abstract: The general procedure underlying Hartree-Fock and Kohn-Sham density
functional theory calculations consists in optimizing orbitals for a
self-consistent solution of the Roothaan-Hall equations in an iterative
process. It is often ignored that multiple self-consistent solutions can exist,
several of which may correspond to minima of the energy functional. In addition
to the difficulty sometimes encountered in converging the calculation to a
self-consistent solution, one must ensure that the correct self-consistent
solution was found, typically the one with the lowest electronic energy.
Convergence to an unwanted solution is in general not trivial to detect and
will deliver incorrect energy and molecular properties, and accordingly a
misleading description of chemical reactivity. Wrong conclusions based on
incorrect self-consistent field convergence are particularly cumbersome in
automated calculations encountered in high-throughput virtual screening, structure
optimizations, ab initio molecular dynamics, and in real-time explorations of
chemical reactivity, where the vast amount of data can hardly be manually
inspected. Here, we introduce a fast and automated approach to detect and cure
incorrect orbital convergence, which is especially suited for electronic
structure calculations on sequences of molecular structures. Our approach
consists of a randomized perturbation of the converged electron density
(matrix) intended to push orbital convergence to solutions that correspond to
another stationary point (of potentially lower electronic energy) in the
variational parameter space of an electronic wave function approximation. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Chemistry"
] |
Title: CMS-HF Calorimeter Upgrade for Run II,
Abstract: CMS-HF Calorimeters have been undergoing a major upgrade for the last couple
of years to alleviate the problems encountered during Run I, especially in the
PMT and the readout systems. In this poster, the problems caused by the old
PMTs installed in the detectors and their solutions will be explained.
Initially, regular PMTs with thicker windows, causing large Cherenkov
radiation, were used. Instead of the light coming through the fibers from the
detector, stray muons passing through the PMT itself produce Cherenkov
radiation in the PMT window, resulting in erroneously large signals. Usually,
large signals are the result of very high-energy particles in the calorimeter
and are tagged as important. As a result, these so-called window events
generate false triggers. Four-anode PMTs with thinner windows were selected to
reduce these window events. Additional channels also help eliminate such
remaining events through algorithms comparing the output of different PMT
channels. During the EYETS 16/17 period of LHC operations, the final
components of the modifications to the readout system, namely the two-channel
front-end electronics cards, were installed. The complete upgrade of the HF
Calorimeter, including the preparations for Run II, will be discussed in
this poster, with possible effects on the eventual data taking. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Adapting Engineering Education to Industrie 4.0 Vision,
Abstract: Industrie 4.0 is originally a future vision described in the high-tech
strategy of the German government that is built upon information and
communication technologies like Cyber-Physical Systems, Internet of Things,
Physical Internet and Internet of Services to achieve a high degree of
flexibility in production, higher productivity rates through real-time
monitoring and diagnosis, and a lower wastage rate of material in production.
An important part of the tasks in the preparation for Industrie 4.0 is the
adaptation of higher education to the requirements of this vision, in
particular the engineering education. In this work, we introduce a road map
consisting of three pillars describing the changes/enhancements to be conducted
in the areas of curriculum development, lab concept, and student club
activities. We also report our current application of this road map at the
Turkish-German University, Istanbul. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Orthogonal Statistical Learning,
Abstract: We provide excess risk guarantees for statistical learning in the presence of
an unknown nuisance component. We analyze a two-stage sample splitting
meta-algorithm that takes as input two arbitrary estimation algorithms: one for
the target model and one for the nuisance model. We show that if the population
risk satisfies a condition called Neyman orthogonality, the impact of the first
stage error on the excess risk bound achieved by the meta-algorithm is of
second order. Our general theorem is agnostic to the particular algorithms used
for the target and nuisance and only makes an assumption on their individual
performance. This enables the use of a plethora of existing results from
statistical learning and machine learning literature to give new guarantees for
learning with a nuisance component. Moreover, by focusing on excess risk rather
than parameter estimation, we can give guarantees under weaker assumptions than
in previous works and accommodate the case where the target parameter belongs
to a complex nonparametric class. When the nuisance and target parameters
belong to arbitrary classes, we characterize conditions on the metric entropy
such that oracle rates---rates of the same order as if we knew the nuisance
model---are achieved. We also analyze the rates achieved by specific estimation
algorithms such as variance-penalized empirical risk minimization, neural
network estimation and sparse high-dimensional linear model estimation. We
highlight the applicability of our results via four applications of primary
importance: 1) heterogeneous treatment effect estimation, 2) offline policy
optimization, 3) domain adaptation, and 4) learning with missing data. | [
1,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics",
"Computer Science"
] |
Title: A note on conditional versus joint unconditional weak convergence in bootstrap consistency results,
Abstract: The consistency of a bootstrap or resampling scheme is classically validated
by weak convergence of conditional laws. However, when working with stochastic
processes in the space of bounded functions and their weak convergence in the
Hoffmann-J{\o}rgensen sense, an obstacle occurs: due to possible
non-measurability, neither laws nor conditional laws are well-defined. Starting
from an equivalent formulation of weak convergence based on the bounded
Lipschitz metric, a classical workaround is to formulate bootstrap consistency
in terms of the latter distance between what might be called a
\emph{conditional law} of the (non-measurable) bootstrap process and the law of
the limiting process. The main contribution of this note is to provide an
equivalent formulation of bootstrap consistency in the space of bounded
functions which is more intuitive and easy to work with. Essentially, the
equivalent formulation consists of (unconditional) weak convergence of the
original process jointly with two bootstrap replicates. As a by-product, we
provide two equivalent formulations of bootstrap consistency for statistics
taking values in separable metric spaces: the first in terms of (unconditional)
weak convergence of the statistic jointly with its bootstrap replicates, the
second in terms of convergence in probability of the empirical distribution
function of the bootstrap replicates. Finally, the asymptotic validity of
bootstrap-based confidence intervals and tests is briefly revisited, with
particular emphasis on the, in practice unavoidable, Monte Carlo approximation
of conditional quantiles. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Flag representations of mixed volumes and mixed functionals of convex bodies,
Abstract: Mixed volumes $V(K_1,\dots, K_d)$ of convex bodies $K_1,\dots ,K_d$ in
Euclidean space $\mathbb{R}^d$ are of central importance in the Brunn-Minkowski
theory. Representations for mixed volumes are available in special cases, for
example as integrals over the unit sphere with respect to mixed area measures.
More generally, in Hug-Rataj-Weil (2013) a formula for $V(K [n], M[d-n])$,
$n\in \{1,\dots ,d-1\}$, as a double integral over flag manifolds was
established which involved certain flag measures of the convex bodies $K$ and
$M$ (and required a general position of the bodies). In the following, we
discuss the general case $V(K_1[n_1],\dots , K_k[n_k])$, $n_1+\cdots +n_k=d$,
and show a corresponding result involving the flag measures
$\Omega_{n_1}(K_1;\cdot),\dots, \Omega_{n_k}(K_k;\cdot)$. For this purpose, we
first establish a curvature representation of mixed volumes over the normal
bundles of the bodies involved.
We also obtain a corresponding flag representation for the mixed functionals
from translative integral geometry and a local version, for mixed (translative)
curvature measures. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Systematic Quantum Mechanical Region Determination in QM/MM Simulation,
Abstract: Hybrid quantum mechanical-molecular mechanical (QM/MM) simulations are widely
used in enzyme simulation. Over the past several years, more than ten
convergence studies of QM/MM methods have revealed that key energetic and structural
properties approach asymptotic limits with only very large (ca. 500-1000 atom)
QM regions. This slow convergence has been observed to be due in part to
significant charge transfer between the core active site and surrounding
protein environment, which cannot be addressed by improvement of MM force
fields or the embedding method employed within QM/MM. Given this slow
convergence, it becomes essential to identify strategies for the most
atom-economical determination of optimal QM regions and to gain insight into
the crucial interactions captured only in large QM regions. Here, we extend and
develop two methods for quantitative determination of QM regions. First, in the
charge shift analysis (CSA) method, we probe the reorganization of electron
density when core active site residues are removed completely, as determined by
large-QM region QM/MM calculations. Second, we introduce the
highly-parallelizable Fukui shift analysis (FSA), which identifies how
core/substrate frontier states are altered by the presence of an additional QM
residue on smaller initial QM regions. We demonstrate that the FSA and CSA
approaches are complementary and consistent on three test case enzymes:
catechol O-methyltransferase, cytochrome P450cam, and hen eggwhite lysozyme. We
also introduce validation strategies and test sensitivities of the two methods
to geometric structure, basis set size, and electronic structure methodology.
Both methods represent promising approaches for the systematic, unbiased
determination of quantum mechanical effects in enzymes and large systems that
necessitate multi-scale modeling. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Chemistry"
] |
Title: Look Mum, no VM Exits! (Almost),
Abstract: Multi-core CPUs are a standard component in many modern embedded systems.
Their virtualisation extensions enable the isolation of services and are
gaining popularity for implementing mixed-criticality or otherwise split systems. We
present Jailhouse, a Linux-based, OS-agnostic partitioning hypervisor that uses
novel architectural approaches to combine Linux, a powerful general-purpose
system, with strictly isolated special-purpose components. Our design goals
favour simplicity over features, establish a minimal code base, and minimise
hypervisor activity.
Direct assignment of hardware to guests, together with a deferred
initialisation scheme, offloads any complex hardware handling and bootstrapping
issues from the hypervisor to the general purpose OS. The hypervisor
establishes isolated domains that directly access physical resources without
the need for emulation or paravirtualisation. This retains, with negligible
system overhead, Linux's feature-richness in non-critical parts, while frugal
safety and real-time critical workloads execute in isolated, safe domains. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Harnessing Structures in Big Data via Guaranteed Low-Rank Matrix Estimation,
Abstract: Low-rank modeling plays a pivotal role in signal processing and machine
learning, with applications ranging from collaborative filtering, video
surveillance, medical imaging, to dimensionality reduction and adaptive
filtering. Many modern high-dimensional data and interactions thereof can be
modeled as lying approximately in a low-dimensional subspace or manifold,
possibly with additional structures, and their proper exploitation leads to
significant reductions in the costs of sensing, computation and storage. In recent
years, there has been a plethora of progress in understanding how to exploit low-rank
structures using computationally efficient procedures in a provable manner,
including both convex and nonconvex approaches. On one side, convex relaxations
such as nuclear norm minimization often lead to statistically optimal
procedures for estimating low-rank matrices, where first-order methods are
developed to address the computational challenges; on the other side, there is
emerging evidence that properly designed nonconvex procedures, such as
projected gradient descent, often provide globally optimal solutions with a
much lower computational cost in many problems. This survey article will
provide a unified overview of these recent advances on low-rank matrix
estimation from incomplete measurements. Attention is paid to rigorous
characterization of the performance of these algorithms, and to problems where
the low-rank matrix has additional structural properties that require new
algorithmic designs and theoretical analysis. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics",
"Statistics"
] |
Title: Nano-jet Related to Bessel Beams and to Super-Resolutions in Micro-sphere Optical Experiments,
Abstract: The appearance of a Nano-jet in the micro-sphere optical experiments is
analyzed by relating this effect to non-diffracting Bessel beams. By inserting
a circular aperture with a subwavelength radius at the EM waist, and sending
the transmitted light into a confocal microscope, EM fluctuations from the
different Bessel beams are avoided. On this constant EM field, evanescent waves
are superposed. While this effect improves the optical depth of the imaging
process, the object's fine structures are obtained from the modulation of the
EM fields by the evanescent waves. The use of a
combination of the micro-sphere optical system with an interferometer for phase
contrast measurements is described. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: English-Japanese Neural Machine Translation with Encoder-Decoder-Reconstructor,
Abstract: Neural machine translation (NMT) has recently become popular in the field of
machine translation. However, NMT suffers from the problem of repeating or
missing words in the translation. To address this problem, Tu et al. (2017)
proposed an encoder-decoder-reconstructor framework for NMT using
back-translation. In this method, they selected the best forward translation
model in the same manner as Bahdanau et al. (2015), and then trained a
bi-directional translation model as fine-tuning. Their experiments show that it
offers significant improvement in BLEU scores in Chinese-English translation
task. We confirm that our re-implementation also shows the same tendency and
alleviates the problem of repeating and missing words in the translation on an
English-Japanese task as well. In addition, we evaluate the effectiveness of
pre-training by comparing it with a jointly-trained model of forward
translation and back-translation. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Finding Bottlenecks: Predicting Student Attrition with Unsupervised Classifier,
Abstract: With pressure to increase graduation rates and reduce time to degree in
higher education, it is important to identify at-risk students early. Automated
early warning systems are therefore highly desirable. In this paper, we use
unsupervised clustering techniques to predict the graduation status of declared
majors in five departments at California State University Northridge (CSUN),
based on a minimal number of lower division courses in each major. In addition,
we use the detected clusters to identify hidden bottleneck courses. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: SAVITR: A System for Real-time Location Extraction from Microblogs during Emergencies,
Abstract: We present SAVITR, a system that leverages the information posted on the
Twitter microblogging site to monitor and analyse emergency situations. Given
that only a very small percentage of microblogs are geo-tagged, it is essential
for such a system to extract locations from the text of the microblogs. We
employ natural language processing techniques to infer the locations mentioned
in the microblog text, in an unsupervised fashion and display it on a map-based
interface. The system is designed for efficient performance, achieving an
F-score of 0.79, and is approximately two orders of magnitude faster than other
available tools for location extraction. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Tidal tails around the outer halo globular clusters Eridanus and Palomar 15,
Abstract: We report the discovery of tidal tails around the two outer halo globular
clusters, Eridanus and Palomar 15, based on $gi$-band images obtained with
DECam at the CTIO 4-m Blanco Telescope. The tidal tails are among the most
remote stellar streams presently known in the Milky Way halo. Cluster members
have been determined from the color-magnitude diagrams and used to establish
the radial density profiles, which show, in both cases, a strong departure in
the outer regions from the best-fit King profile. Spatial density maps reveal
tidal tails stretching out on opposite sides of both clusters, extending over a
length of $\sim$760 pc for Eridanus and $\sim$1160 pc for Palomar 15. The great
circle projected from the Palomar 15 tidal tails encompasses the Galactic
Center, while that for Eridanus passes close to four dwarf satellite galaxies,
one of which (Sculptor) is at a comparable distance to that of Eridanus. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Methods for Estimation of Convex Sets,
Abstract: In the framework of shape constrained estimation, we review methods and works
done in convex set estimation. These methods mostly build on stochastic and
convex geometry, empirical process theory, functional analysis, linear
programming, extreme value theory, etc. The statistical problems that we review
include density support estimation, estimation of the level sets of densities
or depth functions, nonparametric regression, etc. We focus on the estimation
of convex sets under the Nikodym and Hausdorff metrics, which require different
techniques and, quite surprisingly, lead to very different results, in
particular in density support estimation. Finally, we discuss computational
issues in high dimensions. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: An Invitation to Polynomiography via Exponential Series,
Abstract: The subject of Polynomiography deals with algorithmic visualization of
polynomial equations, having many applications in STEM and art, see [Kal04].
Here we consider the polynomiography of the partial sums of the exponential
series. While the exponential function is taught in standard calculus courses,
it is unlikely that properties of zeros of its partial sums are considered in
such courses, let alone their visualization as science or art. The Monthly
article by Zemyan discusses some mathematical properties of these zeros. Here we
exhibit some fractal and non-fractal polynomiographs of the partial sums while
also presenting a brief introduction of the underlying concepts.
Polynomiography establishes a different kind of appreciation of the
significance of polynomials in STEM, as well as in art. It helps in the
teaching of various topics at diverse levels. It also leads to new discoveries
on polynomials and inspires new applications. We also present a link for the
educator to get access to a demo polynomiography software together with a
module that helps teach basic topics to middle and high school students, as
well as undergraduates. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: Comparison moduli spaces of Riemann surfaces,
Abstract: We define a kind of moduli space of nested surfaces and mappings, which we
call a comparison moduli space. We review examples of such spaces in geometric
function theory and modern Teichmueller theory, and illustrate how a wide range
of phenomena in complex analysis are captured by this notion of moduli space.
The paper includes a list of open problems in classical and modern function
theory and Teichmueller theory ranging from general theoretical questions to
specific technical problems. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: The Trace Criterion for Kernel Bandwidth Selection for Support Vector Data Description,
Abstract: Support vector data description (SVDD) is a popular anomaly detection
technique. The SVDD classifier partitions the whole data space into an
$\textit{inlier}$ region, which consists of the region $\textit{near}$ the
training data, and an $\textit{outlier}$ region, which consists of points
$\textit{away}$ from the training data. The computation of the SVDD classifier
requires a kernel function, for which the Gaussian kernel is a common choice.
The Gaussian kernel has a bandwidth parameter, and it is important to set the
value of this parameter correctly for good results. A small bandwidth leads to
overfitting such that the resulting SVDD classifier overestimates the number of
anomalies, whereas a large bandwidth leads to underfitting and an inability to
detect many anomalies. In this paper, we present a new unsupervised method for
selecting the Gaussian kernel bandwidth. Our method, which exploits the
low-rank representation of the kernel matrix to suggest a kernel bandwidth
value, is competitive with existing bandwidth selection methods. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Results of measurements of the flux of albedo muons with NEVOD-DECOR experimental complex,
Abstract: Results of investigations of the near-horizontal muons in the range of zenith
angles of 85-95 degrees are presented. In this range, so-called "albedo" muons
(atmospheric muons scattered in the ground into the upper hemisphere) are
detected. Albedo muons are one of the main sources of the background in
neutrino experiments. Experimental data of two series of measurements conducted
at the experimental complex NEVOD-DECOR, with a total duration of about 30
thousand hours of "live" time, are analyzed. The results of measurements of the muon flux
intensity are compared with simulation results using Monte-Carlo on the basis
of two multiple Coulomb scattering models: a model of point-like nuclei and a
model taking into account the finite size of nuclei. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: General multilevel Monte Carlo methods for pricing discretely monitored Asian options,
Abstract: We describe general multilevel Monte Carlo methods that estimate the price of
an Asian option monitored at $m$ fixed dates. Our approach yields unbiased
estimators with standard deviation $O(\epsilon)$ in $O(m + (1/\epsilon)^{2})$
expected time for a variety of processes including the Black-Scholes model,
Merton's jump-diffusion model, the Square-Root diffusion model, Kou's double
exponential jump-diffusion model, the variance gamma and NIG exponential Levy
processes and, via the Milstein scheme, processes driven by scalar stochastic
differential equations. Using the Euler scheme, our approach estimates the
Asian option price with root mean square error $O(\epsilon)$ in
$O(m+(\ln(\epsilon)/\epsilon)^{2})$ expected time for processes driven by
multidimensional stochastic differential equations. Numerical experiments
confirm that our approach outperforms the conventional Monte Carlo method by a
factor of order $m$. | [
0,
0,
0,
0,
0,
1
] | [
"Mathematics",
"Quantitative Finance",
"Statistics"
] |
Title: Discovery of Complex Anomalous Patterns of Sexual Violence in El Salvador,
Abstract: When sexual violence is a product of organized crime or social imaginary, the
links between sexual violence episodes can be understood as a latent structure.
With this assumption in place, we can use data science to uncover complex
patterns. In this paper we focus on the use of data mining techniques to unveil
complex anomalous spatiotemporal patterns of sexual violence. We illustrate
their use by analyzing all reported rapes in El Salvador over a period of nine
years. Through our analysis, we are able to provide evidence of phenomena that,
to the best of our knowledge, have not been previously reported in literature.
We devote special attention to a pattern we discover in the East, where
underage victims report their boyfriends as perpetrators at anomalously high
rates. Finally, we explain how such analyzes could be conducted in real-time,
enabling early detection of emerging patterns to allow law enforcement agencies
and policy makers to react accordingly. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Quantitative Biology"
] |
Title: Using Social Network Information in Bayesian Truth Discovery,
Abstract: We investigate the problem of truth discovery based on opinions from multiple
agents who may be unreliable or biased. We consider the case where agents'
reliabilities or biases are correlated if they belong to the same community,
which defines a group of agents with similar opinions regarding a particular
event. An agent can belong to different communities for different events, and
these communities are unknown a priori. We incorporate knowledge of the agents'
social network in our truth discovery framework and develop Laplace variational
inference methods to estimate agents' reliabilities, communities, and the event
states. We also develop a stochastic variational inference method to scale our
model to large social networks. Simulations and experiments on real data
suggest that when observations are sparse, our proposed methods perform better
than several other inference methods, including majority voting, TruthFinder,
AccuSim, the Confidence-Aware Truth Discovery method, the Bayesian Classifier
Combination (BCC) method, and the Community BCC method. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: IVOA Recommendation: HiPS - Hierarchical Progressive Survey,
Abstract: This document presents HiPS, a hierarchical scheme for the description,
storage and access of sky survey data. The system is based on hierarchical
tiling of sky regions at finer and finer spatial resolution which facilitates a
progressive view of a survey, and supports multi-resolution zooming and
panning. HiPS uses the HEALPix tessellation of the sky as the basis for the
scheme and is implemented as a simple file structure with a direct indexing
scheme that leads to practical implementations. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Computer Science"
] |
Title: Deep Structured Learning for Facial Action Unit Intensity Estimation,
Abstract: We consider the task of automated estimation of facial expression intensity.
This involves estimation of multiple output variables (facial action units ---
AUs) that are structurally dependent. Their structure arises from statistically
induced co-occurrence patterns of AU intensity levels. Modeling this structure
is critical for improving the estimation performance; however, this performance
is bounded by the quality of the input features extracted from face images. The
goal of this paper is to model these structures and estimate complex feature
representations simultaneously by combining conditional random field (CRF)
encoded AU dependencies with deep learning. To this end, we propose a novel
Copula CNN deep learning approach for modeling multivariate ordinal variables.
Our model accounts for $ordinal$ structure in output variables and their
$non$-$linear$ dependencies via copula functions modeled as cliques of a CRF.
These are jointly optimized with deep CNN feature encoding layers using a newly
introduced balanced batch iterative training algorithm. We demonstrate the
effectiveness of our approach on the task of AU intensity estimation on two
benchmark datasets. We show that joint learning of the deep features and the
target output structure results in significant performance gains compared to
existing deep structured models for analysis of facial expressions. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Placing the spotted T Tauri star LkCa 4 on an HR diagram,
Abstract: Ages and masses of young stars are often estimated by comparing their
luminosities and effective temperatures to pre-main sequence stellar evolution
tracks, but magnetic fields and starspots complicate both the observations and
evolution. To understand their influence, we study the heavily-spotted
weak-lined T-Tauri star LkCa 4 by searching for spectral signatures of
radiation originating from the starspot or starspot groups. We introduce a new
methodology for constraining both the starspot filling factor and the spot
temperature by fitting two-temperature stellar atmosphere models constructed
from Phoenix synthetic spectra to a high-resolution near-IR IGRINS spectrum.
Clearly discernible spectral features arise from both a hot photospheric
component $T_{\mathrm{hot}} \sim4100$ K and from a cool component
$T_{\mathrm{cool}} \sim2700-3000$ K, which covers $\sim80\%$ of the visible
surface. This mix of hot and cool emission is supported by analyses of the
spectral energy distribution, rotational modulation of colors and of TiO band
strengths, and features in low-resolution optical/near-IR spectroscopy.
Although the revised effective temperature and luminosity make LkCa 4 appear
much younger and lower mass than previous estimates from unspotted stellar
evolution models, appropriate estimates will require the production and
adoption of spotted evolutionary models. Biases from starspots likely afflict
most fully convective young stars and contribute to uncertainties in ages and
age spreads of open clusters. In some spectral regions starspots act as a
featureless veiling continuum owing to high rotational broadening and heavy
line-blanketing in cool star spectra. Some evidence is also found for an
anti-correlation between the velocities of the warm and cool components. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Global solutions to reaction-diffusion equations with super-linear drift and multiplicative noise,
Abstract: Let $\xi(t\,,x)$ denote space-time white noise and consider a
reaction-diffusion equation of the form \[
\dot{u}(t\,,x)=\tfrac12 u''(t\,,x) + b(u(t\,,x)) + \sigma(u(t\,,x))
\xi(t\,,x), \] on $\mathbb{R}_+\times[0\,,1]$, with homogeneous Dirichlet
boundary conditions and suitable initial data, in the case that there exists
$\varepsilon>0$ such that $\vert b(z)\vert \ge|z|(\log|z|)^{1+\varepsilon}$ for
all sufficiently-large values of $|z|$. When $\sigma\equiv 0$, it is well known
that such PDEs frequently have non-trivial stationary solutions. By contrast,
Bonder and Groisman (2009) have recently shown that there is finite-time blowup
when $\sigma$ is a non-zero constant. In this paper, we prove that the
Bonder--Groisman condition is unimprovable by showing that the
reaction-diffusion equation with noise is "typically" well posed when $\vert
b(z) \vert =O(|z|\log_+|z|)$ as $|z|\to\infty$. We interpret the word
"typically" in two essentially-different ways without altering the conclusions
of our assertions. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Statistics",
"Physics"
] |
Title: Parametric Analysis of Cherenkov Light LDF from EAS for High Energy Gamma Rays and Nuclei: Ways of Practical Application,
Abstract: In this paper we propose a 'knee-like' approximation of the lateral
distribution of the Cherenkov light from extensive air showers in the energy
range 30-3000 TeV and study a possibility of its practical application in high
energy ground-based gamma-ray astronomy experiments (in particular, in
TAIGA-HiSCORE). The approximation has a very good accuracy for individual
showers and can be easily simplified for practical application in the HiSCORE
wide angle timing array in the condition of a limited number of triggered
stations. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: On a Minkowski-like inequality for asymptotically flat static manifolds,
Abstract: The Minkowski inequality is a classical inequality in differential geometry,
giving a bound from below, on the total mean curvature of a convex surface in
Euclidean space, in terms of its area. Recently there has been interest in
proving versions of this inequality for manifolds other than R^n; for example,
such an inequality holds for surfaces in spatial Schwarzschild and
AdS-Schwarzschild manifolds. In this note, we adapt a recent analysis of Y. Wei
to prove a Minkowski-like inequality for general static asymptotically flat
manifolds. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Sublogarithmic Distributed Algorithms for Lovász Local lemma, and the Complexity Hierarchy,
Abstract: Locally Checkable Labeling (LCL) problems include essentially all the classic
problems of $\mathsf{LOCAL}$ distributed algorithms. In a recent enlightening
revelation, Chang and Pettie [arXiv 1704.06297] showed that any LCL (on bounded
degree graphs) that has an $o(\log n)$-round randomized algorithm can be solved
in $T_{LLL}(n)$ rounds, which is the randomized complexity of solving (a
relaxed variant of) the Lovász Local Lemma (LLL) on bounded degree $n$-node
graphs. Currently, the best known upper bound on $T_{LLL}(n)$ is $O(\log n)$,
by Chung, Pettie, and Su [PODC'14], while the best known lower bound is
$\Omega(\log\log n)$, by Brandt et al. [STOC'16]. Chang and Pettie conjectured
that there should be an $O(\log\log n)$-round algorithm.
Making the first step of progress towards this conjecture, and providing a
significant improvement on the algorithm of Chung et al. [PODC'14], we prove
that $T_{LLL}(n)= 2^{O(\sqrt{\log\log n})}$. Thus, any $o(\log n)$-round
randomized distributed algorithm for any LCL problem on bounded degree graphs
can be automatically sped up to run in $2^{O(\sqrt{\log\log n})}$ rounds.
Using this improvement and a number of other ideas, we also improve the
complexity of a number of graph coloring problems (in arbitrary degree graphs)
from the $O(\log n)$-round results of Chung, Pettie and Su [PODC'14] to
$2^{O(\sqrt{\log\log n})}$. These problems include defective coloring, frugal
coloring, and list vertex-coloring. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Testing isomorphism of lattices over CM-orders,
Abstract: A CM-order is a reduced order equipped with an involution that mimics complex
conjugation. The Witt-Picard group of such an order is a certain group of ideal
classes that is closely related to the "minus part" of the class group. We
present a deterministic polynomial-time algorithm for the following problem,
which may be viewed as a special case of the principal ideal testing problem:
given a CM-order, decide whether two given elements of its Witt-Picard group
are equal. In order to prevent coefficient blow-up, the algorithm operates with
lattices rather than with ideals. An important ingredient is a technique
introduced by Gentry and Szydlo in a cryptographic context. Our application of
it to lattices over CM-orders hinges upon a novel existence theorem for
auxiliary ideals, which we deduce from a result of Konyagin and Pomerance in
elementary number theory. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: A Multi-Objective Deep Reinforcement Learning Framework,
Abstract: This paper presents a new multi-objective deep reinforcement learning (MODRL)
framework based on deep Q-networks. We propose the use of linear and non-linear
methods to develop the MODRL framework that includes both single-policy and
multi-policy strategies. The experimental results on two benchmark problems
including the two-objective deep sea treasure environment and the
three-objective mountain car problem indicate that the proposed framework is
able to converge to the optimal Pareto solutions effectively. The proposed
framework is generic, which allows implementation of different deep
reinforcement learning algorithms in different complex environments. This
therefore overcomes many difficulties involved with standard multi-objective
reinforcement learning (MORL) methods existing in the current literature. The
framework creates a platform as a testbed environment to develop methods for
solving various problems associated with the current MORL. Details of the
framework implementation can be referred to
this http URL. | [
0,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Show, Attend and Interact: Perceivable Human-Robot Social Interaction through Neural Attention Q-Network,
Abstract: For a safe, natural and effective human-robot social interaction, it is
essential to develop a system that allows a robot to demonstrate the
perceivable responsive behaviors to complex human behaviors. We introduce the
Multimodal Deep Attention Recurrent Q-Network using which the robot exhibits
human-like social interaction skills after 14 days of interacting with people
in an uncontrolled real world. Each and every day during the 14 days, the
system gathered robot interaction experiences with people through a
trial-and-error method and then trained the MDARQN on these experiences using
an end-to-end reinforcement learning approach. The results of interaction-based
learning indicate that the robot has learned to respond to complex human
behaviors in a perceivable and socially acceptable manner. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Sparse Gaussian Processes for Continuous-Time Trajectory Estimation on Matrix Lie Groups,
Abstract: Continuous-time trajectory representations are a powerful tool that can be
used to address several issues in many practical simultaneous localization and
mapping (SLAM) scenarios, like continuously collected measurements distorted by
robot motion, or asynchronous sensor measurements. Sparse Gaussian
processes (GP) allow for a probabilistic non-parametric trajectory
representation that enables fast trajectory estimation by sparse GP regression.
However, previous approaches are limited to dealing with vector space
representations of state only. In this technical report we extend the work by
Barfoot et al. [1] to general matrix Lie groups, by applying a constant-velocity
prior and defining a locally linear GP. This enables using the sparse GP approach in
a large space of practical SLAM settings. In this report we give the theory and
leave the experimental evaluation to future publications. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Radial anisotropy in omega Cen limiting the room for an intermediate-mass black hole,
Abstract: Finding an intermediate-mass black hole (IMBH) in a globular cluster (or
proving its absence) would provide valuable insights into our understanding of
galaxy formation and evolution. However, it is challenging to identify a unique
signature of an IMBH that cannot be accounted for by other processes.
Observational claims of IMBH detection are indeed often based on analyses of
the kinematics of stars in the cluster core, the most common signature being a
rise in the velocity dispersion profile towards the centre of the system.
Unfortunately, this IMBH signal is degenerate with the presence of
radially-biased pressure anisotropy in the globular cluster. To explore the
role of anisotropy in shaping the observational kinematics of clusters, we
analyse the case of omega Cen by comparing the observed profiles to those
calculated from the family of LIMEPY models, which account for the presence of
anisotropy in the system in a physically motivated way. The best-fit radially
anisotropic models reproduce the observational profiles well, and describe the
central kinematics as derived from Hubble Space Telescope proper motions
without the need for an IMBH. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Astrophysics"
] |
Title: Shrinking Horizon Model Predictive Control with Signal Temporal Logic Constraints under Stochastic Disturbances,
Abstract: We present Shrinking Horizon Model Predictive Control (SHMPC) for
discrete-time linear systems with Signal Temporal Logic (STL) specification
constraints under stochastic disturbances. The control objective is to maximize
an optimization function under the restriction that a given STL specification
is satisfied with high probability against stochastic uncertainties. We
formulate a general solution, which does not require precise knowledge of the
probability distributions of the (possibly dependent) stochastic disturbances;
only the bounded support intervals of the density functions and moment
intervals are used. For the specific case of disturbances that are independent
and normally distributed, we optimize the controllers further by utilizing
knowledge of the disturbance probability distributions. We show that in both
cases, the control law can be obtained by solving optimization problems with
linear constraints at each step. We experimentally demonstrate effectiveness of
this approach by synthesizing a controller for an HVAC system. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Analytic properties of approximate lattices,
Abstract: We introduce a notion of cocycle-induction for strong uniform approximate
lattices in locally compact second countable groups and use it to relate
(relative) Kazhdan- and Haagerup-type of approximate lattices to the
corresponding properties of the ambient locally compact groups. Our approach
applies to large classes of uniform approximate lattices (though not all of
them) and is flexible enough to cover the $L^p$-versions of Property (FH) and
a-(FH)-menability as well as quasified versions thereof à la Burger--Monod and
Ozawa. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Extremes of threshold-dependent Gaussian processes,
Abstract: In this contribution we are concerned with the asymptotic behaviour as $u\to
\infty$ of $\mathbb{P}\{\sup_{t\in [0,T]} X_u(t)> u\}$, where $X_u(t),t\in
[0,T],u>0$ is a family of centered Gaussian processes with continuous
trajectories. A key application of our findings concerns
$\mathbb{P}\{\sup_{t\in [0,T]} (X(t)+ g(t))> u\}$ as $u\to\infty$, for $X$ a
centered Gaussian process and $g$ some measurable trend function. Further
applications include the approximation of both the ruin time and the ruin
probability of the Brownian motion risk model with constant force of interest. | [
0,
0,
1,
1,
0,
0
] | [
"Mathematics",
"Statistics"
] |
Title: The null hypothesis of common jumps in case of irregular and asynchronous observations,
Abstract: This paper proposes novel tests for the absence of jumps in a univariate
semimartingale and for the absence of common jumps in a bivariate
semimartingale. Our methods rely on ratio statistics of power variations based
on irregular observations, sampled at different frequencies. We develop central
limit theorems for the statistics under the respective null hypotheses and
apply bootstrap procedures to assess the limiting distributions. Further we
define corrected statistics to improve the finite sample performance.
Simulations show that the test based on our corrected statistic yields good
results and even outperforms existing tests in the case of regular
observations. | [
0,
0,
1,
0,
0,
0
] | [
"Statistics",
"Quantitative Finance"
] |
Title: What Drives the International Development Agenda? An NLP Analysis of the United Nations General Debate 1970-2016,
Abstract: There is surprisingly little known about agenda setting for international
development in the United Nations (UN) despite it having a significant
influence on the process and outcomes of development efforts. This paper
addresses this shortcoming using a novel approach that applies natural language
processing techniques to countries' annual statements in the UN General Debate.
Every year UN member states deliver statements during the General Debate on
their governments' perspective on major issues in world politics. These
speeches provide invaluable information on state preferences on a wide range of
issues, including international development, but have largely been overlooked
in the study of global politics. This paper identifies the main international
development topics that states raise in these speeches between 1970 and 2016,
and examines the country-specific drivers of international development rhetoric. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Quantitative Finance"
] |
Title: Randomizing growing networks with a time-respecting null model,
Abstract: Complex networks are often used to represent systems that are not static but
grow with time: people make new friendships, new papers are published and refer
to the existing ones, and so forth. To assess the statistical significance of
measurements made on such networks, we propose a randomization methodology---a
time-respecting null model---that preserves both the network's degree sequence
and the time evolution of individual nodes' degree values. By preserving the
temporal linking patterns of the analyzed system, the proposed model is able to
factor out the effect of the system's temporal patterns on its structure. We
apply the model to the citation network of Physical Review scholarly papers and
the citation network of US movies. The model reveals that the two datasets are
strikingly different with respect to their degree-degree correlations, and we
discuss the important implications of this finding on the information provided
by paradigmatic node centrality metrics such as indegree and Google's PageRank.
The randomization methodology proposed here can be used to assess the
significance of any structural property in growing networks, which could bring
new insights into the problems where null models play a critical role, such as
the detection of communities and network motifs. | [
1,
1,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: The path to high-energy electron-positron colliders: from Wideroe's betatron to Touschek's AdA and to LEP,
Abstract: We describe the road which led to the construction and exploitation of
electron positron colliders, highlighting how the young physics student Bruno
Touschek met the Norwegian engineer Rolf Wideroe in Germany, during WWII, and
collaborated in building the 15 MeV betatron, a secret project directed by
Wideroe and financed by the Ministry of Aviation of the Reich. This is how
Bruno Touschek learnt the science of making particle accelerators and was
ready, many years later, to propose and build AdA, the first electron positron
collider, in Frascati, Italy, in 1960. We shall then see how AdA was brought
from Frascati to Orsay, in France. Taking advantage of the Orsay Linear
Accelerator as injector, the Franco-Italian team was able to prove that
collisions had taken place, opening the way to the use of particle colliders as
a means to explore high energy physics. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Dimensionality reduction for acoustic vehicle classification with spectral embedding,
Abstract: We propose a method for recognizing moving vehicles, using data from roadside
audio sensors. This problem has applications ranging widely, from traffic
analysis to surveillance. We extract a frequency signature from the audio
signal using a short-time Fourier transform, and treat each time window as an
individual data point to be classified. By applying a spectral embedding, we
decrease the dimensionality of the data sufficiently for K-nearest neighbors to
provide accurate vehicle identification. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
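The abstract above describes a concrete pipeline (an STFT frequency signature per time window, spectral embedding for dimensionality reduction, then K-nearest-neighbor classification). Below is a minimal sketch of that kind of pipeline; the synthetic signals, sampling rate, window length, embedding dimension, and neighbor count are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch (not the authors' exact pipeline): STFT features per
# time window, spectral embedding for dimensionality reduction, then KNN.
import numpy as np
from scipy.signal import stft
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 8000  # assumed sampling rate (Hz)

def synthetic_vehicle(f0, n_windows=40, win_len=1024):
    """Toy 'vehicle' signal: a noisy tone at engine frequency f0 (Hz)."""
    t = np.arange(n_windows * win_len) / fs
    return np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)

signals = {0: synthetic_vehicle(120.0), 1: synthetic_vehicle(300.0)}  # two classes

X, y = [], []
for label, sig in signals.items():
    _, _, Z = stft(sig, fs=fs, nperseg=1024)   # frequency signature per window
    spect = np.abs(Z).T                        # each time window = one data point
    X.append(spect)
    y.append(np.full(spect.shape[0], label))
X, y = np.vstack(X), np.concatenate(y)

# Spectral embedding reduces the dimensionality before KNN classification.
X_emb = SpectralEmbedding(n_components=5, affinity="rbf", random_state=0).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_emb, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```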
Title: Variational obstacle avoidance problem on Riemannian manifolds,
Abstract: We introduce variational obstacle avoidance problems on Riemannian manifolds
and derive necessary conditions for the existence of their normal extremals.
The problem consists of minimizing an energy functional depending on the
velocity and covariant acceleration, among a set of admissible curves, and also
depending on a navigation function used to avoid an obstacle on the workspace,
a Riemannian manifold.
We study two different scenarios: a general one on a Riemannian manifold and
a sub-Riemannian problem. By introducing a left-invariant metric on a Lie
group, we also study the variational obstacle avoidance problem on a Lie group.
We apply the results to the obstacle avoidance problem of a planar rigid body
and a unicycle. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning,
Abstract: We present Deep Voice 3, a fully-convolutional attention-based neural
text-to-speech (TTS) system. Deep Voice 3 matches state-of-the-art neural
speech synthesis systems in naturalness while training ten times faster. We
scale Deep Voice 3 to data set sizes unprecedented for TTS, training on more
than eight hundred hours of audio from over two thousand speakers. In addition,
we identify common error modes of attention-based speech synthesis networks,
demonstrate how to mitigate them, and compare several different waveform
synthesis methods. We also describe how to scale inference to ten million
queries per day on one single-GPU server. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Two provably consistent divide and conquer clustering algorithms for large networks,
Abstract: In this article, we advance divide-and-conquer strategies for solving the
community detection problem in networks. We propose two algorithms which
perform clustering on a number of small subgraphs and finally patch the
results into a single clustering. The main advantage of these algorithms is
that they bring down significantly the computational cost of traditional
algorithms, including spectral clustering, semi-definite programs, modularity
based methods, likelihood based methods etc., without losing on accuracy and
even improving accuracy at times. These algorithms are also, by nature,
parallelizable. Thus, exploiting the facts that most traditional algorithms are
accurate and the corresponding optimization problems are much simpler in small
problems, our divide-and-conquer methods provide an omnibus recipe for scaling
traditional algorithms up to large networks. We prove consistency of these
algorithms under various subgraph selection procedures and perform extensive
simulations and real-data analysis to understand the advantages of the
divide-and-conquer approach in various settings. | [
0,
0,
1,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Effective Description of Higher-Order Scalar-Tensor Theories,
Abstract: Most existing theories of dark energy and/or modified gravity, involving a
scalar degree of freedom, can be conveniently described within the framework of
the Effective Theory of Dark Energy, based on the unitary gauge where the
scalar field is uniform. We extend this effective approach by allowing the
Lagrangian in unitary gauge to depend on the time derivative of the lapse
function. Although this dependence generically signals the presence of an extra
scalar degree of freedom, theories that contain only one propagating scalar
degree of freedom, in addition to the usual tensor modes, can be constructed by
requiring the initial Lagrangian to be degenerate. Starting from a general
quadratic action, we derive the dispersion relations for the linear
perturbations around Minkowski and a cosmological background. Our analysis
directly applies to the recently introduced Degenerate Higher-Order
Scalar-Tensor (DHOST) theories. For these theories, we find that one cannot
recover a Poisson-like equation in the static linear regime except for the
subclass that includes the Horndeski and so-called "beyond Horndeski" theories.
We also discuss Lorentz-breaking models inspired by Horava gravity. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Finite-dimensional Gaussian approximation with linear inequality constraints,
Abstract: Introducing inequality constraints in Gaussian process (GP) models can lead
to more realistic uncertainties in learning a great variety of real-world
problems. We consider the finite-dimensional Gaussian approach from Maatouk and
Bay (2017) which can satisfy inequality conditions everywhere (either
boundedness, monotonicity or convexity). Our contributions are threefold.
First, we extend their approach in order to deal with general sets of linear
inequalities. Second, we explore several Markov Chain Monte Carlo (MCMC)
techniques to approximate the posterior distribution. Third, we investigate
theoretical and numerical properties of the constrained likelihood for
covariance parameter estimation. According to experiments on both artificial
and real data, our full framework together with a Hamiltonian Monte Carlo-based
sampler provides efficient results on both data fitting and uncertainty
quantification. | [
1,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics"
] |
Title: Learning Rates for Kernel-Based Expectile Regression,
Abstract: Conditional expectiles are becoming an increasingly important tool in finance
as well as in other areas of applications. We analyse a support vector machine
type approach for estimating conditional expectiles and establish learning
rates that are minimax optimal modulo a logarithmic factor if Gaussian RBF
kernels are used and the desired expectile is smooth in a Besov sense. As a
special case, our learning rates improve the best known rates for kernel-based
least squares regression in this scenario. Key ingredients of our statistical
analysis are a general calibration inequality for the asymmetric least squares
loss, a corresponding variance bound as well as an improved entropy number
bound for Gaussian RBF kernels. | [
0,
0,
0,
1,
0,
0
] | [
"Statistics",
"Mathematics",
"Quantitative Finance"
] |
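The abstract above studies learning rates for kernel-based expectile regression; the sketch below illustrates only the estimator itself (asymmetric least squares with a Gaussian RBF kernel, solved by iteratively reweighted kernel ridge regression), not the paper's analysis. The kernel width, regularization strength, and toy data are assumptions.

```python
# Illustrative sketch of kernel expectile regression: asymmetric least squares
# with a Gaussian RBF kernel, solved by iteratively reweighted kernel ridge.
import numpy as np

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 200))[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * rng.standard_normal(200)

def rbf(A, B, gamma=20.0):
    """Gaussian RBF kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_expectile(X, y, tau=0.8, lam=1e-3, iters=50):
    """Fit the tau-expectile by iteratively reweighted kernel ridge regression."""
    K, n = rbf(X, X), len(y)
    alpha = np.zeros(n)
    for _ in range(iters):
        w = np.where(y >= K @ alpha, tau, 1.0 - tau)   # asymmetric weights
        alpha = np.linalg.solve(w[:, None] * K + lam * np.eye(n), w * y)
    return alpha

alpha = kernel_expectile(X, y, tau=0.8)
print("fitted 0.8-expectile at x=0.25:", (rbf(np.array([[0.25]]), X) @ alpha).item())
```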
Title: Obstructions for three-coloring and list three-coloring $H$-free graphs,
Abstract: A graph is $H$-free if it has no induced subgraph isomorphic to $H$. We
characterize all graphs $H$ for which there are only finitely many minimal
non-three-colorable $H$-free graphs. Such a characterization was previously
known only in the case when $H$ is connected. This solves a problem posed by
Golovach et al. As a second result, we characterize all graphs $H$ for which
there are only finitely many $H$-free minimal obstructions for list
3-colorability. | [
1,
0,
0,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter,
Abstract: We present a new matched filter algorithm for direct detection of point
sources in the immediate vicinity of bright stars. The stellar Point Spread
Function (PSF) is first subtracted using a Karhunen-Loève Image Processing
(KLIP) algorithm with Angular and Spectral Differential Imaging (ADI and SDI).
The KLIP-induced distortion of the astrophysical signal is included in the
matched filter template by computing a forward model of the PSF at every
position in the image. To optimize the performance of the algorithm, we conduct
extensive planet injection and recovery tests and tune the exoplanet spectra
template and KLIP reduction aggressiveness to maximize the Signal-to-Noise
Ratio (SNR) of the recovered planets. We show that only two spectral templates
are necessary to recover any young Jovian exoplanets with minimal SNR loss. We
also developed a complete pipeline for the automated detection of point source
candidates, the calculation of Receiver Operating Characteristics (ROC), false
positives based contrast curves, and completeness contours. We process in a
uniform manner more than 330 datasets from the Gemini Planet Imager Exoplanet
Survey (GPIES) and assess GPI typical sensitivity as a function of the star and
the hypothetical companion spectral type. This work allows for the first time a
comparison of different detection algorithms at a survey scale accounting for
both planet completeness and false positive rate. We show that the new forward
model matched filter allows the detection of $50\%$ fainter objects than a
conventional cross-correlation technique with a Gaussian PSF template for the
same false positive rate. | [
0,
1,
0,
0,
0,
0
] | [
"Physics",
"Astrophysics"
] |
Title: On the Solution of Linear Programming Problems in the Age of Big Data,
Abstract: The Big Data phenomenon has spawned large-scale linear programming problems.
In many cases, these problems are non-stationary. In this paper, we describe a
new scalable algorithm called NSLP for solving high-dimensional, non-stationary
linear programming problems on modern cluster computing systems. The algorithm
consists of two phases: Quest and Targeting. The Quest phase calculates a
solution of the system of inequalities defining the constraint system of the
linear programming problem under the condition of dynamic changes in input
data. To this end, the apparatus of Fejer mappings is used. The Targeting phase
forms a special system of points having the shape of an n-dimensional
axisymmetric cross. The cross moves in the n-dimensional space in such a way
that the solution of the linear programming problem is located all the time in
an "-vicinity of the central point of the cross. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Plasmonic properties of refractory titanium nitride,
Abstract: The development of plasmonic and metamaterial devices requires the research
of high-performance materials, alternative to standard noble metals. Renewed as
refractory stable compound for durable coatings, titanium nitride has been
recently proposed as an efficient plasmonic material. Here, by using a first
principles approach, we investigate the plasmon dispersion relations of TiN
bulk and we predict the effect of pressure on its optoelectronic properties.
Our results explain the main features of TiN in the visible range and prove a
universal scaling law which relates its mechanical and plasmonic properties as
a function of pressure. Finally, we address the formation and stability of
surface-plasmon polaritons at different TiN/dielectric interfaces proposed by
recent experiments. The unusual combination of plasmonics and refractory
features paves the way for the realization of plasmonic devices able to work at
conditions not sustainable by usual noble metals. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Improving Community Detection by Mining Social Interactions,
Abstract: Social relationships can be divided into different classes based on the
regularity with which they occur and the similarity among them. Thus, rare and
somewhat similar relationships are random and cause noise in a social network,
hiding the actual structure of the network and preventing an accurate
analysis of it. In this context, we propose a process to handle
social network data that exploits temporal features to improve the detection of
communities by existing algorithms. By removing random interactions, we observe
that social networks converge to a topology with more purely social
relationships and more modular communities. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
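A minimal sketch of the general idea in the abstract above: drop rare (likely random) interactions before running an off-the-shelf community-detection algorithm. The toy interaction counts, the regularity threshold, and the choice of greedy modularity maximization are assumptions made for illustration, not the authors' exact procedure.

```python
# Minimal sketch: remove rare interactions, then detect communities on the
# remaining, regularly recurring relationships.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy interaction log: (user_a, user_b, number of interactions in the window)
interactions = [("a", "b", 9), ("b", "c", 7), ("a", "c", 5),
                ("d", "e", 8), ("e", "f", 6), ("d", "f", 7),
                ("c", "d", 1), ("a", "f", 1)]   # the last two are "rare" links

MIN_COUNT = 2  # assumed regularity threshold

G = nx.Graph()
for u, v, w in interactions:
    if w >= MIN_COUNT:                 # keep only regularly recurring ties
        G.add_edge(u, v, weight=w)

communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])
```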
Title: The set of quantum correlations is not closed,
Abstract: We construct a linear system non-local game which can be played perfectly
using a limit of finite-dimensional quantum strategies, but which cannot be
played perfectly on any finite-dimensional Hilbert space, or even with any
tensor-product strategy. In particular, this shows that the set of
(tensor-product) quantum correlations is not closed. The constructed non-local
game provides another counterexample to the "middle" Tsirelson problem, with a
shorter proof than our previous paper (though at the loss of the universal
embedding theorem). We also show that it is undecidable to determine if a
linear system game can be played perfectly with a finite-dimensional strategy,
or a limit of finite-dimensional quantum strategies. | [
0,
0,
1,
0,
0,
0
] | [
"Physics",
"Mathematics"
] |
Title: Transferring Agent Behaviors from Videos via Motion GANs,
Abstract: A major bottleneck for developing general reinforcement learning agents is
determining rewards that will yield desirable behaviors under various
circumstances. We introduce a general mechanism for automatically specifying
meaningful behaviors from raw pixels. In particular, we train a generative
adversarial network to produce short sub-goals represented through motion
templates. We demonstrate that this approach generates visually meaningful
behaviors in unknown environments with novel agents and describe how these
motions can be used to train reinforcement learning agents. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Effective computation of $\mathrm{SO}(3)$ and $\mathrm{O}(3)$ linear representations symmetry classes,
Abstract: We propose a general algorithm to compute all the symmetry classes of any
$\mathrm{SO}(3)$ or $\mathrm{O}(3)$ linear representation. This method relies
on the introduction of a binary operator between sets of conjugacy classes of
closed subgroups, called the clips. We compute explicit tables for this
operation which allows to solve definitively the problem. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: Mass Conservative and Energy Stable Finite Difference Methods for the Quasi-incompressible Navier-Stokes-Cahn-Hilliard system: Primitive Variable and Projection-Type Schemes,
Abstract: In this paper we describe two fully mass conservative, energy stable, finite
difference methods on a staggered grid for the quasi-incompressible
Navier-Stokes-Cahn-Hilliard (q-NSCH) system governing a binary incompressible
fluid flow with variable density and viscosity. Both methods, namely the
primitive method (finite difference method in the primitive variable
formulation) and the projection method (finite difference method in a
projection-type formulation), are so designed that the mass of the binary fluid
is preserved, and the energy of the system equations is always non-increasing
in time at the fully discrete level. We also present an efficient, practical
nonlinear multigrid method - comprised of a standard FAS method for the
Cahn-Hilliard equation, and a method based on the Vanka-type smoothing strategy
for the Navier-Stokes equation - for solving these equations. We test the
scheme in the context of Capillary Waves, rising droplets and Rayleigh-Taylor
instability. Quantitative comparisons are made with existing analytical
solutions or previous numerical results that validate the accuracy of our
numerical schemes. Moreover, in all cases, mass of the single component and the
binary fluid was conserved up to $10^{-8}$ and energy decreases in time. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics",
"Computer Science"
] |
Title: Automatic Trimap Generation for Image Matting,
Abstract: Image matting is a longstanding problem in computational photography.
Although it has been studied for more than two decades, developing an automatic
matting algorithm that does not require any human effort remains a challenge.
Most state-of-the-art matting algorithms require human intervention in the form
of a trimap or scribbles to generate the alpha matte from the input image. In
this paper, we present a simple and efficient approach to automatically generate
the trimap from the input image and make the whole matting process free of
human intervention. We use a learning-based matting method to generate the
matte from the automatically generated trimap. Experimental results demonstrate
that our method produces good-quality trimaps, which result in accurate matte
estimation. We validate our results by replacing the automatically generated
trimap with a manually created trimap while
using the same image matting algorithm. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
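The sketch below shows one common, generic way to derive a trimap from a rough binary foreground mask (erode for definite foreground, dilate for definite background, and mark the band in between as unknown). It is not claimed to be the paper's method; the band width and the label values are assumptions.

```python
# Generic trimap construction from a rough binary mask via morphology.
import numpy as np
from scipy import ndimage

def trimap_from_mask(mask, band=10):
    """mask: 2-D boolean array (True = rough foreground). band: width in pixels."""
    structure = np.ones((3, 3), dtype=bool)
    sure_fg = ndimage.binary_erosion(mask, structure, iterations=band)
    sure_bg = ~ndimage.binary_dilation(mask, structure, iterations=band)
    trimap = np.full(mask.shape, 128, dtype=np.uint8)  # 128 = unknown band
    trimap[sure_fg] = 255                              # 255 = definite foreground
    trimap[sure_bg] = 0                                # 0   = definite background
    return trimap

# Toy example: a filled square as the rough foreground region.
mask = np.zeros((100, 100), dtype=bool)
mask[30:70, 30:70] = True
print(np.unique(trimap_from_mask(mask)))   # -> [  0 128 255]
```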
Title: X-ray spectral properties of seven heavily obscured Seyfert 2 galaxies,
Abstract: We present the combined Chandra and Swift-BAT spectral analysis of seven
Seyfert 2 galaxies selected from the Swift-BAT 100-month catalog. We selected
nearby (z<=0.03) sources lacking of a ROSAT counterpart and never previously
observed with Chandra in the 0.3-10 keV energy range, and targeted these
objects with 10 ks Chandra ACIS-S observations. The X-ray spectral fitting over
the 0.3-150 keV energy range allows us to determine that all the objects are
significantly obscured, having NH>=1E23 cm^(-2) at a >99% confidence level.
Moreover, one to three sources are candidate Compton thick Active Galactic
Nuclei (CT-AGN), i.e., have NH>=1E24 cm^(-2). We also test the recent "spectral
curvature" method developed by Koss et al. (2016) to find candidate CT-AGN,
finding a good agreement between our results and their predictions. Since the
selection criteria we adopted have been effective in detecting highly obscured
AGN, further observations of these and other Seyfert 2 galaxies selected from
the Swift-BAT 100-month catalog will allow us to create a statistically
significant sample of highly obscured AGN, therefore better understanding the
physics of the obscuration processes. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Automatic Analysis of EEGs Using Big Data and Hybrid Deep Learning Architectures,
Abstract: Objective: A clinical decision support tool that automatically interprets
EEGs can reduce time to diagnosis and enhance real-time applications such as
ICU monitoring. Clinicians have indicated that a sensitivity of 95% with a
specificity below 5% was the minimum requirement for clinical acceptance. We
propose a high-performance classification system based on principles of big data
and machine learning. Methods: A hybrid machine learning system that uses
hidden Markov models (HMM) for sequential decoding and deep learning networks
for postprocessing is proposed. These algorithms were trained and evaluated
using the TUH EEG Corpus, which is the world's largest publicly available
database of clinical EEG data. Results: Our approach delivers a sensitivity
above 90% while maintaining a specificity below 5%. This system detects three
events of clinical interest: (1) spike and/or sharp waves, (2) periodic
lateralized epileptiform discharges, (3) generalized periodic epileptiform
discharges. It also detects three events used to model background noise: (1)
artifacts, (2) eye movement, and (3) background. Conclusions: A hybrid HMM/deep
learning system can deliver a low false alarm rate on EEG event detection,
making automated analysis a viable option for clinicians. Significance: The TUH
EEG Corpus enables application of highly data-consumptive machine learning
algorithms to EEG analysis. Performance is approaching clinical acceptance for
real-time applications. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Implementing a Concept Network Model,
Abstract: The same concept can mean different things or be instantiated in different
forms depending on context, suggesting a degree of flexibility within the
conceptual system. We propose that a compositional network model can be used to
capture and predict this flexibility. We modeled individual concepts (e.g.,
BANANA, BOTTLE) as graph-theoretical networks, in which properties (e.g.,
YELLOW, SWEET) were represented as nodes and their associations as edges. In
this framework, networks capture the within-concept statistics that reflect how
properties correlate with each other across instances of a concept. We ran a
classification analysis using graph eigendecomposition to validate these
models, and find that these models can successfully discriminate between object
concepts. We then computed formal measures from these concept networks and
explored their relationship to conceptual structure. We find that diversity
coefficients and core-periphery structure can be interpreted as network-based
measures of conceptual flexibility and stability, respectively. These results
support the feasibility of a concept network framework and highlight its
ability to formally capture important characteristics of the conceptual system. | [
0,
0,
0,
0,
1,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Generalized Sheet Transition Conditions (GSTCs) for a Metascreen -- A Fishnet Metasurface,
Abstract: We used a multiple-scale homogenization method to derive generalized sheet
transition conditions (GSTCs) for electromagnetic fields at the surface of a
metascreen---a metasurface with a "fishnet" structure. These surfaces are
characterized by periodically-spaced arbitrary-shaped apertures in an otherwise
relatively impenetrable surface. The parameters in these GSTCs are interpreted
as effective surface susceptibilities and surface porosities, which are related
to the geometry of the apertures that constitute the metascreen. Finally, we
emphasize the subtle but important difference between the GSTCs required for
metascreens and those required for metafilms (a metasurface with a "cermet"
structure, i.e., an array of isolated (non-touching) scatterers). | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: $k^{τ,ε}$-anonymity: Towards Privacy-Preserving Publishing of Spatiotemporal Trajectory Data,
Abstract: Mobile network operators can track subscribers via passive or active
monitoring of device locations. The recorded trajectories offer an
unprecedented outlook on the activities of large user populations, which
enables developing new networking solutions and services, and scaling up
studies across research disciplines. Yet, the disclosure of individual
trajectories raises significant privacy concerns: thus, these data are often
protected by restrictive non-disclosure agreements that limit their
availability and impede potential usages. In this paper, we contribute to the
development of technical solutions to the problem of privacy-preserving
publishing of spatiotemporal trajectories of mobile subscribers. We propose an
algorithm that generalizes the data so that they satisfy
$k^{\tau,\epsilon}$-anonymity, an original privacy criterion that thwarts
attacks on trajectories. Evaluations with real-world datasets demonstrate that
our algorithm attains its objective while retaining a substantial level of
accuracy in the data. Our work is a step forward in the direction of open,
privacy-preserving datasets of spatiotemporal trajectories. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Statistics"
] |
Title: Life-span of blowup solutions to semilinear wave equation with space-dependent critical damping,
Abstract: This paper is concerned with the blowup phenomena for initial value problem
of semilinear wave equation with critical space-dependent damping term
(DW:$V$). The main result of the present paper is to give a solution of the
problem and to provide a sharp estimate of the lifespan of such a solution when
$\frac{N}{N-1}<p\leq p_S(N+V_0)$, where $p_S(N)$ is the Strauss exponent for
(DW:$0$). The main idea of the proof is due to the technique of test functions
for (DW:$0$) originated by Zhou--Han (2014, MR3169791). Moreover, we find a new
threshold value $V_0=\frac{(N-1)^2}{N+1}$ for the coefficient of critical and
singular damping $|x|^{-1}$. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Physics"
] |
Title: The Word Problem of $\mathbb{Z}^n$ Is a Multiple Context-Free Language,
Abstract: The \emph{word problem} of a group $G = \langle \Sigma \rangle$ can be
defined as the set of formal words in $\Sigma^*$ that represent the identity in
$G$. When viewed as formal languages, this gives a strong connection between
classes of groups and classes of formal languages. For example, Anisimov showed
that a group is finite if and only if its word problem is a regular language,
and Muller and Schupp showed that a group is virtually-free if and only if its
word problem is a context-free language. Above this, not much was known, until
Salvati showed recently that the word problem of $\mathbb{Z}^2$ is a multiple
context-free language, giving first such example. We generalize Salvati's
result to show that the word problem of $\mathbb{Z}^n$ is a multiple
context-free language for any $n$. | [
1,
0,
1,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
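The snippet below illustrates only the definition of the word problem for $\mathbb{Z}^n$ used in the abstract above: a word over the generators and their inverses represents the identity iff the exponents of every generator sum to zero. It does not construct the multiple context-free grammar.

```python
# Membership test for the word problem of Z^n (definition only, no grammar).
from collections import Counter

def is_identity_in_Zn(word):
    """word: list of generator labels like 'x1' or 'X1' (capital = inverse)."""
    totals = Counter()
    for g in word:
        totals[g.lower()] += -1 if g.isupper() else 1
    return all(v == 0 for v in totals.values())

print(is_identity_in_Zn(["x1", "x2", "X1", "X2"]))   # True: x1 x2 x1^-1 x2^-1 = e in Z^2
print(is_identity_in_Zn(["x1", "x1", "X1"]))         # False: reduces to x1
```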
Title: Learning Policies for Markov Decision Processes from Data,
Abstract: We consider the problem of learning a policy for a Markov decision process
consistent with data captured on the state-action pairs followed by the
policy. We assume that the policy belongs to a class of parameterized policies
which are defined using features associated with the state-action pairs. The
features are known a priori, however, only an unknown subset of them could be
relevant. The policy parameters that correspond to an observed target policy
are recovered using $\ell_1$-regularized logistic regression that best fits the
observed state-action samples. We establish bounds on the difference between
the average reward of the estimated and the original policy (regret) in terms
of the generalization error and the ergodic coefficient of the underlying
Markov chain. To that end, we combine sample complexity theory and sensitivity
analysis of the stationary distribution of Markov chains. Our analysis suggests
that to achieve regret within order $O(\sqrt{\epsilon})$, it suffices to use
training sample size on the order of $\Omega(\log n \cdot poly(1/\epsilon))$,
where $n$ is the number of the features. We demonstrate the effectiveness of
our method on a synthetic robot navigation example. | [
1,
0,
1,
1,
0,
0
] | [
"Computer Science",
"Statistics"
] |
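A minimal sketch of the estimation step described above: recovering the parameters of a logistic policy (two actions) from observed state-action pairs via $\ell_1$-regularized logistic regression. The feature map, sample size, regularization strength, and sparse true parameter are toy assumptions.

```python
# Sketch: recover sparse policy parameters from state-action samples with
# l1-regularized logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_features = 5000, 20
theta_true = np.zeros(n_features)
theta_true[:3] = [2.0, -1.5, 1.0]            # only a few features are relevant

features = rng.standard_normal((n_samples, n_features))     # stand-in for phi(s, a)
p_action1 = 1.0 / (1.0 + np.exp(-features @ theta_true))    # target logistic policy
actions = rng.binomial(1, p_action1)                         # observed actions

est = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
est.fit(features, actions)

print("nonzero coefficients:", np.flatnonzero(est.coef_[0]))
print("estimated vs true (first 3):", est.coef_[0][:3].round(2), theta_true[:3])
```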
Title: Singular Riemannian flows and characteristic numbers,
Abstract: Let $M$ be an even-dimensional, oriented closed manifold. We show that the
restriction of a singular Riemannian flow on $M$ to a small tubular
neighborhood of each connected component of its singular stratum is
foliated-diffeomorphic to an isometric flow on the same neighborhood. We then
prove a formula that computes characteristic numbers of $M$ as the sum of
residues associated to the infinitesimal foliation at the components of the
singular stratum of the flow. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Transformation Models in High-Dimensions,
Abstract: Transformation models are a very important tool for applied statisticians and
econometricians. In many applications, the dependent variable is transformed so
that homogeneity or normal distribution of the error holds. In this paper, we
analyze transformation models in a high-dimensional setting, where the set of
potential covariates is large. We propose an estimator for the transformation
parameter and we show that it is asymptotically normally distributed using an
orthogonalized moment condition where the nuisance functions depend on the
target parameter. In a simulation study, we show that the proposed estimator
works well in small samples. A common practice in labor economics is to
transform wages with the log function. In this study, we test whether this
transformation holds in CPS data from the United States. | [
0,
0,
1,
1,
0,
0
] | [
"Statistics",
"Quantitative Finance"
] |
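As a low-dimensional illustration of estimating a transformation parameter (much simpler than the paper's high-dimensional, orthogonalized estimator), the sketch below fits a Box-Cox transformation by profile likelihood; an estimate near zero supports the log transformation. The simulated wages are an assumption.

```python
# Low-dimensional illustration only: Box-Cox profile-likelihood estimate of the
# transformation parameter; lambda near 0 corresponds to the log transform.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Toy "wages": log-normal, so the log transform is the right one (lambda = 0).
wages = np.exp(2.5 + 0.6 * rng.standard_normal(10_000))

transformed, lam_hat = stats.boxcox(wages)
print(f"estimated Box-Cox lambda: {lam_hat:.3f} (0 would be an exact log transform)")
```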
Title: On the role of synaptic stochasticity in training low-precision neural networks,
Abstract: Stochasticity and limited precision of synaptic weights in neural network
models are key aspects of both biological and hardware modeling of learning
processes. Here we show that a neural network model with stochastic binary
weights naturally gives prominence to exponentially rare dense regions of
solutions with a number of desirable properties such as robustness and good
generalization performance, while typical solutions are isolated and hard to
find. Binary solutions of the standard perceptron problem are obtained from a
simple gradient descent procedure on a set of real values parametrizing a
probability distribution over the binary synapses. Both analytical and
numerical results are presented. An algorithmic extension aimed at training
discrete deep neural networks is also investigated. | [
1,
1,
0,
1,
0,
0
] | [
"Computer Science",
"Quantitative Biology"
] |
Title: Characterization of polynomials whose large powers have all positive coefficients,
Abstract: We give a criterion which characterizes a homogeneous real multi-variate
polynomial to have the property that all sufficiently large powers of the
polynomial (as well as their products with any given positive homogeneous
polynomial) have positive coefficients. Our result generalizes a result of De
Angelis, which corresponds to the case of homogeneous bi-variate polynomials,
as well as a classical result of Pólya, which corresponds to the case of a
specific linear polynomial. As an application, we also give a characterization
of certain polynomial beta functions, which are the spectral radius functions
of the defining matrix functions of Markov chains. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Bloch line dynamics within moving domain walls in 3D ferromagnets,
Abstract: We study field-driven magnetic domain wall dynamics in garnet strips by
large-scale three-dimensional micromagnetic simulations. The domain wall
propagation velocity as a function of the applied field exhibits a low-field
linear part terminated by a sudden velocity drop at a threshold field
magnitude, related to the onset of excitations of internal degrees of freedom
of the domain wall magnetization. By considering a wide range of strip
thicknesses from 30 nm to 1.89 $\mu$m, we find a non-monotonic thickness
dependence of the threshold field for the onset of this instability, proceeding
via nucleation and propagation of Bloch lines within the domain wall. We
identify a critical strip thickness above which the velocity drop is due to
nucleation of horizontal Bloch lines, while for thinner strips and depending on
the boundary conditions employed, either generation of vertical Bloch lines, or
close-to-uniform precession of the domain wall internal magnetization takes
place. For strips of intermediate thicknesses, the vertical Bloch lines assume
a deformed structure due to demagnetizing fields at the strip surfaces,
breaking the symmetry between the top and bottom faces of the strip, and
resulting in circulating Bloch line dynamics along the perimeter of the domain
wall. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Strong Metric Subregularity of Mappings in Variational Analysis and Optimization,
Abstract: Although the property of strong metric subregularity of set-valued mappings
has been present in the literature under various names and with various
definitions for more than two decades, it has attracted much less attention
than its older "siblings", the metric regularity and the strong metric
regularity. The purpose of this paper is to show that the strong metric
subregularity shares the main features of these two most popular regularity
properties and is not less instrumental in applications. We show that the
strong metric subregularity of a mapping F acting between metric spaces is
stable under perturbations of the form f + F, where f is a function with a
small calmness constant. This result is parallel to the Lyusternik-Graves
theorem for metric regularity and to the Robinson theorem for strong
regularity, where the perturbations are represented by a function f with a
small Lipschitz constant. Then we study perturbation stability of the same kind
for mappings acting between Banach spaces, where f is not necessarily
differentiable but admits a set-valued derivative-like approximation. Strong
metric q-subregularity is also considered, where q is a positive real constant
appearing as exponent in the definition. Rockafellar's criterion for strong
metric subregularity involving injectivity of the graphical derivative is
extended to mappings acting in infinite-dimensional spaces. A sufficient
condition for strong metric subregularity is established in terms of
surjectivity of the Fréchet coderivative. Various versions of Newton's method
for solving generalized equations are considered including inexact and
semismooth methods, for which superlinear convergence is shown under strong
metric subregularity. | [
0,
0,
1,
0,
0,
0
] | [
"Mathematics"
] |
Title: Systematic Identification of LAEs for Visible Exploration and Reionization Research Using Subaru HSC (SILVERRUSH). I. Program Strategy and Clustering Properties of ~2,000 Lya Emitters at z=6-7 over the 0.3-0.5 Gpc$^2$ Survey Area,
Abstract: We present the SILVERRUSH program strategy and clustering properties
investigated with $\sim 2,000$ Ly$\alpha$ emitters at $z=5.7$ and $6.6$ found
in the early data of the Hyper Suprime-Cam (HSC) Subaru Strategic Program
survey exploiting the carefully designed narrowband filters. We derive angular
correlation functions with the unprecedentedly large samples of LAEs at $z=6-7$
over the large total area of $14-21$ deg$^2$ corresponding to $0.3-0.5$
comoving Gpc$^2$. We obtain the average large-scale bias values of $b_{\rm
avg}=4.1\pm 0.2$ ($4.5\pm 0.6$) at $z=5.7$ ($z=6.6$) for $\gtrsim L^*$ LAEs,
indicating the weak evolution of LAE clustering from $z=5.7$ to $6.6$. We
compare the LAE clustering results with two independent theoretical models that
suggest an increase of an LAE clustering signal by the patchy ionized bubbles
at the epoch of reionization (EoR), and estimate the neutral hydrogen fraction
to be $x_{\rm HI}=0.15^{+0.15}_{-0.15}$ at $z=6.6$. Based on the halo
occupation distribution models, we find that the $\gtrsim L^*$ LAEs are hosted
by the dark-matter halos with the average mass of $\log (\left < M_{\rm h}
\right >/M_\odot) =11.1^{+0.2}_{-0.4}$ ($10.8^{+0.3}_{-0.5}$) at $z=5.7$
($6.6$) with a Ly$\alpha$ duty cycle of 1 % or less, where the results of
$z=6.6$ LAEs may be slightly biased, due to the increase of the clustering
signal at the EoR. Our clustering analysis reveals the low-mass nature of
$\gtrsim L^*$ LAEs at $z=6-7$, and that these LAEs probably evolve into massive
super-$L^*$ galaxies in the present-day universe. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Asymptotic Enumeration of Compacted Binary Trees,
Abstract: A compacted tree is a graph created from a binary tree such that repeatedly
occurring subtrees in the original tree are represented by pointers to existing
ones, and hence every subtree is unique. Such representations form a special
class of directed acyclic graphs. We are interested in the asymptotic number of
compacted trees of given size, where the size of a compacted tree is given by
the number of its internal nodes. Due to its superexponential growth this
problem poses many difficulties. Therefore we restrict our investigations to
compacted trees of bounded right height, which is the maximal number of edges
going to the right on any path from the root to a leaf.
We solve the asymptotic counting problem for this class as well as a closely
related, further simplified class.
For this purpose, we develop a calculus on exponential generating functions
for compacted trees of bounded right height and for relaxed trees of bounded
right height, which differ from compacted trees by dropping the above described
uniqueness condition. This enables us to derive a recursively defined sequence
of differential equations for the exponential generating functions. The
coefficients can then be determined by performing a singularity analysis of the
solutions of these differential equations.
Our main results are the computation of the asymptotic numbers of relaxed as
well as compacted trees of bounded right height and given size, when the size
tends to infinity. | [
1,
0,
0,
0,
0,
0
] | [
"Mathematics",
"Computer Science"
] |
Title: Stable and unstable vortex knots in a trapped Bose-Einstein condensate,
Abstract: The dynamics of a quantum vortex torus knot ${\cal T}_{P,Q}$ and similar
knots in an atomic Bose-Einstein condensate at zero temperature in the
Thomas-Fermi regime has been considered in the hydrodynamic approximation. The
condensate has a spatially nonuniform equilibrium density profile $\rho(z,r)$
due to an external axisymmetric potential. It is assumed that $z_*=0$, $r_*=1$
is a maximum point for function $r\rho(z,r)$, with $\delta
(r\rho)\approx-(\alpha-\epsilon) z^2/2 -(\alpha+\epsilon) (\delta r)^2/2$ at
small $z$ and $\delta r$. Configuration of knot in the cylindrical coordinates
is specified by a complex $2\pi P$-periodic function
$A(\varphi,t)=Z(\varphi,t)+i [R(\varphi,t)-1]$. In the case $|A|\ll 1$ the
system is described by relatively simple approximate equations for re-scaled
functions $W_n(\varphi)\propto A(2\pi n+\varphi)$, where $n=0,\dots,P-1$, and
$iW_{n,t}=-(W_{n,\varphi\varphi}+\alpha W_n -\epsilon W_n^*)/2-\sum_{j\neq
n}1/(W_n^*-W_j^*)$. At $\epsilon=0$, numerical examples of stable solutions as
$W_n=\theta_n(\varphi-\gamma t)\exp(-i\omega t)$ with non-trivial topology have
been found for $P=3$. Besides that, dynamics of various non-stationary knots
with $P=3$ was simulated, and in some cases a tendency towards a finite-time
singularity has been detected. For $P=2$ at small $\epsilon\neq 0$, rotating
around $z$ axis configurations of the form $(W_0-W_1)\approx
B_0\exp(i\zeta)+\epsilon C(B_0,\alpha)\exp(-i\zeta) + \epsilon
D(B_0,\alpha)\exp(3i\zeta)$ have been investigated, where $B_0>0$ is an
arbitrary constant, $\zeta=k_0\varphi -\Omega_0 t+\zeta_0$, $k_0=Q/2$,
$\Omega_0=(k_0^2-\alpha)/2-2/B_0^2$. In the parameter space $(\alpha, B_0)$,
wide stability regions for such solutions have been found. In unstable bands, a
recurrence of the vortex knot to a weakly excited state has been noted to be
possible. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |
Title: Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems,
Abstract: In this paper, we introduce a stochastic projected subgradient method for
weakly convex (i.e., uniformly prox-regular) nonsmooth, nonconvex functions---a
wide class of functions which includes the additive and convex composite
classes. At a high-level, the method is an inexact proximal point iteration in
which the strongly convex proximal subproblems are quickly solved with a
specialized stochastic projected subgradient method. The primary contribution
of this paper is a simple proof that the proposed algorithm converges at the
same rate as the stochastic gradient method for smooth nonconvex problems. This
result appears to be the first convergence rate analysis of a stochastic (or
even deterministic) subgradient method for the class of weakly convex
functions. | [
1,
0,
1,
0,
0,
0
] | [
"Mathematics",
"Computer Science",
"Statistics"
] |
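A minimal sketch of the method's high-level structure, under simplifying assumptions: an inexact proximal-point outer loop whose $\rho$-strongly convex subproblems are solved with a stochastic subgradient inner loop (no constraint set, so the projection is the identity). The toy problem (absolute-loss regression, which is convex and hence weakly convex), the prox parameter, step sizes, and iteration counts are illustrative choices, not the paper's.

```python
# Sketch of a proximally guided stochastic subgradient loop on a toy problem.
import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 500
x_true = rng.standard_normal(d)
A = rng.standard_normal((m, d))
b = A @ x_true

def stoch_subgrad(x, batch=10):
    """Stochastic subgradient of f(x) = mean_i |a_i^T x - b_i|."""
    idx = rng.integers(0, m, size=batch)
    Ai, bi = A[idx], b[idx]
    return np.sign(Ai @ x - bi) @ Ai / batch

x = np.zeros(d)
rho = 2.0                                        # prox parameter (assumed)
for t in range(100):                             # outer proximal-point loop
    y = x.copy()                                 # current prox center
    for k in range(1, 51):                       # inner stochastic solver for
        g = stoch_subgrad(x) + rho * (x - y)     #   f(x) + (rho/2)||x - y||^2
        x -= g / (rho * k)                       # 1/(rho*k) step (strong convexity)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```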
Title: Solvable Integration Problems and Optimal Sample Size Selection,
Abstract: We compute the integral of a function or the expectation of a random variable
with minimal cost and use, for our new algorithm and for upper bounds of the
complexity, i.i.d. samples. Under certain assumptions it is possible to select
a sample size based on a variance estimation, or -- more generally -- based on
an estimation of a (central absolute) $p$-moment. That way one can guarantee a
small absolute error with high probability, the problem is thus called
solvable. The expected cost of the method depends on the $p$-moment of the
random variable, which can be arbitrarily large.
In order to prove the optimality of our algorithm we also provide lower
bounds. These bounds apply not only to methods based on i.i.d. samples but also
to general randomized algorithms. They show that -- up to constants -- the cost
of the algorithm is optimal in terms of accuracy, confidence level, and norm of
the particular input random variable. Since the considered classes of random
variables or integrands are very large, the worst case cost would be infinite.
Nevertheless one can define adaptive stopping rules such that for each input
the expected cost is finite.
We contrast these positive results with examples of integration problems that
are not solvable. | [
0,
0,
0,
1,
0,
0
] | [
"Mathematics",
"Statistics"
] |
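A crude two-stage illustration of variance-based sample-size selection in the spirit of the abstract above: a pilot sample estimates the variance, and the total sample size is then chosen from a normal-quantile rule. The pilot size, the quantile rule, and the integrand are assumptions; the paper's algorithm and guarantees are more refined.

```python
# Two-stage Monte Carlo: pilot variance estimate, then top up to the chosen n.
import numpy as np

rng = np.random.default_rng(0)
sample = lambda n: rng.exponential(scale=2.0, size=n)   # unknown-variance integrand

eps, delta = 0.05, 0.05              # target absolute error and failure probability
pilot = sample(1_000)                # stage 1: estimate the variance
sigma2_hat = pilot.var(ddof=1)

z = 1.96                             # ~ standard normal quantile for delta = 0.05
n_total = int(np.ceil(z**2 * sigma2_hat / eps**2))

extra = sample(max(n_total - pilot.size, 0))            # stage 2: top up the sample
estimate = np.concatenate([pilot, extra]).mean()
print(f"n = {n_total}, estimate = {estimate:.4f} (true mean = 2.0)")
```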
Title: Label Stability in Multiple Instance Learning,
Abstract: We address the problem of \emph{instance label stability} in multiple
instance learning (MIL) classifiers. These classifiers are trained only on
globally annotated images (bags), but often can provide fine-grained
annotations for image pixels or patches (instances). This is interesting for
computer aided diagnosis (CAD) and other medical image analysis tasks for which
only a coarse labeling is provided. Unfortunately, the instance labels may be
unstable. This means that a slight change in training data could potentially
lead to abnormalities being detected in different parts of the image, which is
undesirable from a CAD point of view. Despite MIL gaining popularity in the CAD
literature, this issue has not yet been addressed. We investigate the stability
of instance labels provided by several MIL classifiers on 5 different datasets,
of which 3 are medical image datasets (breast histopathology, diabetic
retinopathy and computed tomography lung images). We propose an unsupervised
measure to evaluate instance stability, and demonstrate that a
performance-stability trade-off can be made when comparing MIL classifiers. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Statistics",
"Quantitative Biology"
] |
Title: Event-Radar: Real-time Local Event Detection System for Geo-Tagged Tweet Streams,
Abstract: Local event detection uses geo-tagged messages posted on social
networks to reveal ongoing events and their locations. Recent
studies have demonstrated that the geo-tagged tweet stream serves as an
unprecedentedly valuable source for local event detection. Nevertheless, how to
effectively extract local events from large geo-tagged tweet streams in real
time remains challenging. A robust and efficient cloud-based real-time local
event detection software system would benefit various aspects in the real-life
society, from shopping recommendation for customer service providers to
disaster alarming for emergency departments. We use the preliminary research
GeoBurst as a starting point, which proposed a novel method to detect local
events. GeoBurst+ leverages a novel cross-modal authority measure to identify
several pivots in the query window. Such pivots reveal different geo-topical
activities and naturally attract related tweets to form candidate events. It
further summarises the continuous stream and compares the candidates against
the historical summaries to pinpoint truly interesting local events. We mainly
implement a website demonstration system Event-Radar with an improved algorithm
to show real-time local events online for public interest. Better still,
as the query window shifts, our method can update the event list with little
time cost, thus achieving continuous monitoring of the stream. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science"
] |
Title: Understanding Geometry of Encoder-Decoder CNNs,
Abstract: Encoder-decoder networks using convolutional neural network (CNN)
architecture have been extensively used in the deep learning literature thanks to
their excellent performance for various inverse problems in computer vision,
medical imaging, etc. However, it is still difficult to obtain a coherent
geometric view of why such an architecture gives the desired performance. Inspired
by recent theoretical understanding on generalizability, expressivity and
optimization landscape of neural networks, as well as the theory of
convolutional framelets, here we provide a unified theoretical framework that
leads to a better understanding of geometry of encoder-decoder CNNs. Our
unified mathematical framework shows that encoder-decoder CNN architecture is
closely related to nonlinear basis representation using combinatorial
convolution frames, whose expressibility increases exponentially with the
network depth. We also demonstrate the importance of skipped connection in
terms of expressibility, and optimization landscape. | [
1,
0,
0,
1,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Proceedings Eighth Workshop on Intersection Types and Related Systems,
Abstract: This volume contains a final and revised selection of papers presented at the
Eighth Workshop on Intersection Types and Related Systems (ITRS 2016), held on
June 26, 2016 in Porto, in affiliation with FSCD 2016. | [
1,
0,
0,
0,
0,
0
] | [
"Computer Science",
"Mathematics"
] |
Title: Cluster-based Haldane state in edge-shared tetrahedral spin-cluster chain: Fedotovite K$_2$Cu$_3$O(SO$_4$)$_3$,
Abstract: Fedotovite K$_2$Cu$_3$O(SO$_4$)$_3$ is a candidate for a new class of quantum spin
systems, in which edge-shared tetrahedral (EST) spin clusters consisting of
Cu$^{2+}$ are connected by weak inter-cluster couplings to form a one-dimensional
array. Comprehensive experimental studies by magnetic susceptibility,
magnetization, heat capacity, and inelastic neutron scattering measurements
reveal the presence of an effective $S$ = 1 Haldane state below $T \cong 4$ K.
Rigorous theoretical studies provide an insight into the magnetic state of
K$_2$Cu$_3$O(SO$_4$)$_3$: an EST cluster makes a triplet in the ground state
and a one-dimensional chain of ESTs induces a cluster-based Haldane state. We
predict that the cluster-based Haldane state emerges whenever the number of
tetrahedra in the EST is $even$. | [
0,
1,
0,
0,
0,
0
] | [
"Physics"
] |