text (string, 57 to 2.88k characters) | labels (sequence of length 6)
---|---|
Title: A Novel Partitioning Method for Accelerating the Block Cimmino Algorithm,
Abstract: We propose a novel block-row partitioning method in order to improve the
convergence rate of the block Cimmino algorithm for solving general sparse
linear systems of equations. The convergence rate of the block Cimmino
algorithm depends on the orthogonality among the block rows obtained by the
partitioning method. The proposed method takes numerical orthogonality among
block rows into account by proposing a row inner-product graph model of the
coefficient matrix. In the graph partitioning formulation defined on this graph
model, the partitioning objective of minimizing the cutsize directly
corresponds to minimizing the sum of inter-block inner products between block
rows thus leading to an improvement in the eigenvalue spectrum of the iteration
matrix. This in turn leads to a significant reduction in the number of
iterations required for convergence. Extensive experiments conducted on a large
set of matrices confirm the validity of the proposed method against a
state-of-the-art method. | [
1,
0,
0,
0,
0,
0
] |
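
The row inner-product graph model described in the abstract above can be illustrated with a minimal sketch. This is not the authors' implementation: the NumPy setup and the spectral bisection used as a stand-in for a proper cutsize-minimizing graph partitioner are assumptions for illustration only.

```python
import numpy as np

def row_inner_product_graph(A):
    """Edge weight between rows i and j is |<a_i, a_j>| (the abstract's graph model)."""
    W = np.abs(A @ A.T)
    np.fill_diagonal(W, 0.0)
    return W

def spectral_bisection(W):
    """Toy stand-in for a graph partitioner: split the rows with the Fiedler vector,
    so heavily weighted (strongly non-orthogonal) row pairs tend to stay in one block."""
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                    # eigenvector of the 2nd smallest eigenvalue
    return fiedler >= np.median(fiedler)    # boolean block assignment

A = np.random.rand(8, 12)
blocks = spectral_bisection(row_inner_product_graph(A))
print("block assignment:", blocks.astype(int))
```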
Title: Visualized Insights into the Optimization Landscape of Fully Convolutional Networks,
Abstract: Many image processing tasks involve image-to-image mapping, which can be
addressed well by fully convolutional networks (FCN) without any heavy
preprocessing. Although empirically designing and training FCNs can achieve
satisfactory results, reasons for the improvement in performance are slightly
ambiguous. Our study is to make progress in understanding their generalization
abilities through visualizing the optimization landscapes. The visualization of
objective functions is obtained by choosing a solution and projecting its
vicinity onto a 3D space. We compare three FCN-based networks (two existing
models and a new one proposed in this paper for comparison) on multiple datasets.
It has been observed in practice that connections from the pre-pooled
feature maps to the post-upsampled ones can achieve better results. We investigate
the cause and provide experiments to show that the skip-layer connections in
FCN can promote a flat optimization landscape, which is well known to generalize
better. Additionally, we explore the relationship between the model's
generalization ability and loss surface under different batch sizes. Results
show that large-batch training makes the model converge to sharp minimizers
with chaotic vicinities while the small-batch method leads the model to flat
minimizers with smooth and nearly convex regions. Our work may contribute to
insights and analysis for designing and training FCNs. | [
1,
0,
0,
1,
0,
0
] |
Title: Exact upper and lower bounds on the misclassification probability,
Abstract: Exact lower and upper bounds on the best possible misclassification
probability for a finite number of classes are obtained in terms of the total
variation norms of the differences between the sub-distributions over the
classes. These bounds are compared with the exact bounds in terms of the
conditional entropy obtained by Feder and Merhav. | [
1,
0,
1,
1,
0,
0
] |
Title: BB-Graph: A Subgraph Isomorphism Algorithm for Efficiently Querying Big Graph Databases,
Abstract: The big graph database model provides strong modeling for complex
applications and efficient querying. However, it is still a big challenge to
find all exact matches of a query graph in a big graph database, which is known
as the subgraph isomorphism problem. The current subgraph isomorphism
approaches are built on Ullmann's idea of focusing on the strategy of pruning
out the irrelevant candidates. Nevertheless, the existing pruning techniques
need much more improvement to efficiently handle complex queries. Moreover,
many of those existing algorithms need large indices requiring extra memory
consumption. Motivated by these, we introduce a new subgraph isomorphism
algorithm, named BB-Graph, for querying big graph databases efficiently
without requiring a large data structure to be stored in main memory. We test
and compare our proposed BB-Graph algorithm with two popular existing
approaches, GraphQL and Cypher. Our experiments are done on three different
data sets: (1) a very big graph database of a real-life population database,
(2) a graph database of a simulated bank database, and (3) the publicly
available World Cup big graph database. We show that our solution performs
better than the aforementioned algorithms for most of the query types
evaluated on these big databases. | [
1,
0,
0,
0,
0,
0
] |
Title: Analytic solutions of the Madelung equation,
Abstract: We present analytic self-similar solutions for the one-, two- and
three-dimensional Madelung hydrodynamical equation for a free particle. There is a
direct connection between the zeros of the Madelung fluid density and the
magnitude of the quantum potential. | [
0,
0,
1,
0,
0,
0
] |
Title: Unsupervised learning of phase transitions: from principal component analysis to variational autoencoders,
Abstract: We employ unsupervised machine learning techniques to learn latent parameters
which best describe states of the two-dimensional Ising model and the
three-dimensional XY model. These methods range from principal component
analysis to artificial neural network based variational autoencoders. The
states are sampled using a Monte-Carlo simulation above and below the critical
temperature. We find that the predicted latent parameters correspond to the
known order parameters. The latent representation of the states of the models
in question are clustered, which makes it possible to identify phases without
prior knowledge of their existence or the underlying Hamiltonian. Furthermore,
we find that the reconstruction loss function can be used as a universal
identifier for phase transitions. | [
1,
0,
0,
1,
0,
0
] |
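
A minimal sketch of the PCA end of the pipeline described above, assuming synthetic stand-in configurations rather than actual Monte-Carlo samples of the Ising model; it checks that the leading latent parameter tracks the magnetization (the known order parameter).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n, L = 500, 16

# Synthetic stand-ins for Monte-Carlo samples: ordered (low T) vs. disordered (high T).
signs = rng.choice([-1.0, 1.0], size=(n, 1))
ordered = signs * np.where(rng.random((n, L * L)) < 0.05, -1.0, 1.0)   # a few flipped spins
disordered = rng.choice([-1.0, 1.0], size=(n, L * L))
configs = np.vstack([ordered, disordered])

latent = PCA(n_components=2).fit_transform(configs)
magnetization = configs.mean(axis=1)

# The leading latent parameter tracks the order parameter, as reported in the abstract.
print("corr(PC1, magnetization) =", np.corrcoef(latent[:, 0], magnetization)[0, 1])
```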
Title: A novel online scheduling protocol for energy-efficient TWDM-OLT design,
Abstract: Design of energy-efficient access networks has emerged as an important area
of research, since access networks account for $80-90\%$ of the overall Internet
power consumption. TWDM-PON is envisaged to be one of the widely accepted
future access technologies. TWDM-PON offers an additional opportunity to save
energy at the OLT along with the existing energy-efficient ONU design. In this
paper, we focus on the energy-efficient OLT design in a TWDM-PON. While most of
the conventional methods employ a minimization of the number of wavelengths, we
propose a novel approach which aims at minimizing the number of voids created
due to scheduling. In the process, for the first time, we present a
low-complexity on-line scheduling algorithm for the upstream traffic
considering delay constraints. Our extensive simulations demonstrate a
significant improvement in energy efficiency of $\sim 25\%$ for high load at
the OLT receivers. Furthermore, we provide an analytical upper-bound on the
energy-efficiency of the OLT receivers and demonstrate that the proposed
protocol achieves an energy efficiency very close to the bound with a maximum
deviation $\sim 2\%$ for $64$ ONUs. | [
1,
0,
0,
0,
0,
0
] |
Title: Machines and Algorithms,
Abstract: I discuss the evolution of computer architectures with a focus on QCD and
with reference to the interplay between architecture, engineering, data motion
and algorithms. New architectures are discussed and recent performance results
are displayed. I also review recent progress in multilevel solver and
integration algorithms. | [
1,
1,
0,
0,
0,
0
] |
Title: Large Margin Learning in Set to Set Similarity Comparison for Person Re-identification,
Abstract: Person re-identification (Re-ID) aims at matching images of the same person
across disjoint camera views, which is a challenging problem in multimedia
analysis, multimedia editing and content-based media retrieval communities. The
major challenge lies in how to preserve similarity of the same person across
video footages with large appearance variations, while discriminating different
individuals. To address this problem, conventional methods usually consider the
pairwise similarity between persons by only measuring the point to point (P2P)
distance. In this paper, we propose to use a deep learning technique to model a
novel set to set (S2S) distance, in which the underlying objective focuses on
preserving the compactness of intra-class samples for each camera view, while
maximizing the margin between the intra-class set and inter-class set. The S2S
distance metric consists of three terms, namely the class-identity term,
the relative distance term and the regularization term. The class-identity term
keeps the intra-class samples within each camera view gathered together, the
relative distance term maximizes the distance between the intra-class set
and inter-class set across different camera views, and the regularization term
smooths the parameters of the deep convolutional neural network (CNN). As a
result, the final learned deep model can effectively find out the matched
target to the probe object among various candidates in the video gallery by
learning discriminative and stable feature representations. Using the CUHK01,
CUHK03, PRID2011 and Market1501 benchmark datasets, we conducted extensive
comparative evaluations to demonstrate the advantages of our method over the
state-of-the-art approaches. | [
1,
0,
0,
1,
0,
0
] |
Title: Scheduling Constraint Based Abstraction Refinement for Multi-Threaded Program Verification,
Abstract: Bounded model checking is among the most efficient techniques for the
automatic verification of concurrent programs. However, encoding all possible
interleavings often requires a huge and complex formula, which significantly
limits the scalability. This paper proposes a novel and efficient abstraction
refinement method for multi-threaded program verification. Observing that the
huge formula is usually dominated by the exact encoding of the scheduling
constraint, this paper proposes a scheduling-constraint-based abstraction refinement method,
which avoids the huge and complex encoding of BMC. In addition, to obtain an
effective refinement, we have devised two graph-based algorithms over event
order graph for counterexample validation and refinement generation, which can
always obtain a small yet effective refinement constraint. Enhanced by two
constraint-based algorithms for counterexample validation and refinement
generation, we have proved that our method is sound and complete w.r.t. the
given loop unwinding depth. Experimental results on SV-COMP concurrency benchmarks
indicate that our method is promising and significantly outperforms the
existing state-of-the-art tools. | [
1,
0,
0,
0,
0,
0
] |
Title: The braid group for a quiver with superpotential,
Abstract: We survey and compare various generalizations of braid groups for quivers
with superpotential and focus on the cluster braid groups, which are introduced
in a joint work with A.~King. Our motivations come from the study of cluster
algebras, Calabi-Yau categories and Bridgeland stability conditions. | [
0,
0,
1,
0,
0,
0
] |
Title: The infinitesimal characters of discrete series for real spherical spaces,
Abstract: Let $Z=G/H$ be the homogeneous space of a real reductive group and a
unimodular real spherical subgroup, and consider the regular representation of
$G$ on $L^2(Z)$. It is shown that all representations of the discrete series,
that is, the irreducible subrepresentations of $L^2(Z)$, have infinitesimal
characters which are real and belong to a lattice. Moreover, let $K$ be a
maximal compact subgroup of $G$. Then each irreducible representation of $K$
occurs in a finite set of such discrete series representations only. Similar
results are obtained for the twisted discrete series, that is, the discrete
components of the space of square integrable sections of a line bundle, given
by a unitary character on an abelian extension of $H$. | [
0,
0,
1,
0,
0,
0
] |
Title: Coincidence point results involving a generalized class of simulation functions,
Abstract: The purpose of this work is to introduce a general class of $C_G$-simulation
functions and to obtain some new coincidence and common fixed point results in
metric spaces. Some useful examples are presented to illustrate our theorems.
Results obtained in this paper extend, generalize and unify some well known
fixed and common fixed point results. | [
0,
0,
1,
0,
0,
0
] |
Title: A Simulated Cyberattack on Twitter: Assessing Partisan Vulnerability to Spear Phishing and Disinformation ahead of the 2018 U.S. Midterm Elections,
Abstract: State-sponsored "bad actors" increasingly weaponize social media platforms to
launch cyberattacks and disinformation campaigns during elections. Social media
companies, due to their rapid growth and scale, struggle to prevent the
weaponization of their platforms. This study conducts an automated spear
phishing and disinformation campaign on Twitter ahead of the 2018 United States
Midterm Elections. A fake news bot account - the @DCNewsReport - was created
and programmed to automatically send customized tweets with a "breaking news"
link to 138 Twitter users, before being restricted by Twitter.
Overall, one in five users clicked the link, which could have potentially led
to the downloading of ransomware or the theft of private information. However,
the link in this experiment was non-malicious and redirected users to a Google
Forms survey. In predicting users' likelihood to click the link on Twitter, no
statistically significant differences were observed between right-wing and
left-wing partisans, or between Web users and mobile users. The findings signal
that politically expressive Americans on Twitter, regardless of their party
preferences or the devices they use to access the platform, are at risk of
being spear phished on social media. | [
1,
0,
0,
0,
0,
0
] |
Title: Asteroid mass estimation using Markov-chain Monte Carlo,
Abstract: Estimates for asteroid masses are based on their gravitational perturbations
on the orbits of other objects such as Mars, spacecraft, or other asteroids
and/or their satellites. In the case of asteroid-asteroid perturbations, this
leads to an inverse problem in at least 13 dimensions where the aim is to
derive the mass of the perturbing asteroid(s) and six orbital elements for both
the perturbing asteroid(s) and the test asteroid(s) based on astrometric
observations. We have developed and implemented three different mass estimation
algorithms utilizing asteroid-asteroid perturbations: the very rough 'marching'
approximation, in which the asteroids' orbital elements are not fitted, thereby
reducing the problem to a one-dimensional estimation of the mass, an
implementation of the Nelder-Mead simplex method, and most significantly, a
Markov-chain Monte Carlo (MCMC) approach. We describe each of these algorithms
with particular focus on the MCMC algorithm, and present example results using
both synthetic and real data. Our results agree with the published mass
estimates, but suggest that the published uncertainties may be misleading as a
consequence of using linearized mass-estimation methods. Finally, we discuss
remaining challenges with the algorithms as well as future plans. | [
0,
1,
0,
0,
0,
0
] |
Title: Asymptotic Properties of the Maximum Likelihood Estimator in Regime Switching Econometric Models,
Abstract: Markov regime switching models have been widely used in numerous empirical
applications in economics and finance. However, the asymptotic distribution of
the maximum likelihood estimator (MLE) has not been proven for some empirically
popular Markov regime switching models. In particular, the asymptotic
distribution of the MLE has been unknown for models in which some elements of
the transition probability matrix have the value of zero, as is commonly
assumed in empirical applications with models with more than two regimes. This
also includes models in which the regime-specific density depends on both the
current and the lagged regimes such as the seminal model of Hamilton (1989) and
switching ARCH model of Hamilton and Susmel (1994). This paper shows the
asymptotic normality of the MLE and consistency of the asymptotic covariance
matrix estimate of these models. | [
0,
0,
1,
1,
0,
0
] |
Title: Fleet management for autonomous vehicles: Online PDP under special constraints,
Abstract: The VIPAFLEET project consists in developing models and algorithms for
managing a fleet of Individual Public Autonomous Vehicles (VIPA). Hereby, we
consider a fleet of cars distributed at specified stations in an industrial
area to supply internal transportation, where the cars can be used in different
modes of circulation (tram mode, elevator mode, taxi mode). One goal is to
develop and implement suitable algorithms for each mode in order to satisfy all
the requests under an economic point of view by minimizing the total tour
length. The innovative idea and challenge of the project is to develop and
install a dynamic fleet management system that allows the operator to switch
between the different modes within the different periods of the day according
to the dynamic transportation demands of the users. We model the underlying
online transportation system and propose a corresponding fleet management
framework, to handle modes, demands and commands. We consider two modes of
circulation, tram and elevator mode, propose for each mode appropriate online
algorithms and evaluate their performance, both in terms of competitive
analysis and practical behavior. | [
1,
0,
0,
0,
0,
0
] |
Title: Bayesian Model Selection for Misspecified Models in Linear Regression,
Abstract: While the Bayesian Information Criterion (BIC) and Akaike Information
Criterion (AIC) are powerful tools for model selection in linear regression,
they are built on different prior assumptions and thereby apply to different
data generation scenarios. We show that in the finite-dimensional case their
respective assumptions can be unified within an augmented model-plus-noise
space and construct a prior in this space which inherits the beneficial
properties of both AIC and BIC. This allows us to adapt the BIC to be robust
against misspecified models where the signal to noise ratio is low. | [
0,
0,
0,
1,
0,
0
] |
Title: Near-optimal sample complexity for convex tensor completion,
Abstract: We analyze low rank tensor completion (TC) using noisy measurements of a
subset of the tensor. Assuming a rank-$r$, order-$d$, $N \times N \times \cdots
\times N$ tensor where $r=O(1)$, the best sampling complexity that was achieved
is $O(N^{\frac{d}{2}})$, which is obtained by solving a tensor nuclear-norm
minimization problem. However, this bound is significantly larger than the
number of free variables in a low rank tensor which is $O(dN)$. In this paper,
we show that by using an atomic-norm whose atoms are rank-$1$ sign tensors, one
can obtain a sample complexity of $O(dN)$. Moreover, we generalize the matrix
max-norm definition to tensors, which results in a max-quasi-norm (max-qnorm)
whose unit ball has small Rademacher complexity. We prove that solving a
constrained least squares estimation using either the convex atomic-norm or the
nonconvex max-qnorm results in optimal sample complexity for the problem of
low-rank tensor completion. Furthermore, we show that these bounds are nearly
minimax rate-optimal. We also provide promising numerical results for max-qnorm
constrained tensor completion, showing improved recovery results compared to
matricization and alternating least squares. | [
1,
0,
0,
1,
0,
0
] |
Title: Conditional Lower Bounds for Space/Time Tradeoffs,
Abstract: In recent years much effort has been concentrated towards achieving
polynomial time lower bounds on algorithms for solving various well-known
problems. A useful technique for showing such lower bounds is to prove them
conditionally based on well-studied hardness assumptions such as 3SUM, APSP,
SETH, etc. This line of research helps to obtain a better understanding of the
complexity inside P.
A related question asks to prove conditional space lower bounds on data
structures that are constructed to solve certain algorithmic tasks after an
initial preprocessing stage. This question received little attention in
previous research even though it has potential strong impact.
In this paper we address this question and show that surprisingly many of the
well-studied hard problems that are known to have conditional polynomial time
lower bounds are also hard when concerning space. This hardness is shown as a
tradeoff between the space consumed by the data structure and the time needed
to answer queries. The tradeoff may be either smooth or admit one or more
singularity points.
We reveal interesting connections between different space hardness
conjectures and present matching upper bounds. We also apply these hardness
conjectures to both static and dynamic problems and prove their conditional
space hardness.
We believe that this novel framework of polynomial space conjectures can play
an important role in expressing polynomial space lower bounds of many important
algorithmic problems. Moreover, it seems that it can also help in achieving a
better understanding of the hardness of their corresponding problems in terms
of time. | [
1,
0,
0,
0,
0,
0
] |
Title: Full Quantification of Left Ventricle via Deep Multitask Learning Network Respecting Intra- and Inter-Task Relatedness,
Abstract: Cardiac left ventricle (LV) quantification is among the most clinically
important tasks for identification and diagnosis of cardiac diseases, yet still
a challenge due to the high variability of cardiac structure and the complexity
of temporal dynamics. Full quantification, i.e., to simultaneously quantify all
LV indices including two areas (cavity and myocardium), six regional wall
thicknesses (RWT), three LV dimensions, and one cardiac phase, is even more
challenging since the uncertain relatedness within and between each type of
indices may hinder the learning procedure from achieving better convergence and
generalization. In this paper, we propose a newly-designed multitask learning
network (FullLVNet), which is constituted by a deep convolutional neural network
(CNN) for expressive feature embedding of cardiac structure; two subsequent
parallel recurrent neural network (RNN) modules for temporal dynamic modeling;
and four linear models for the final estimation. During the final estimation,
both intra- and inter-task relatedness are modeled to enforce improvement of
generalization: 1) respecting intra-task relatedness, group lasso is applied to
each of the regression tasks for sparse and common feature selection and
consistent prediction; 2) respecting inter-task relatedness, three phase-guided
constraints are proposed to penalize violation of the temporal behavior of the
obtained LV indices. Experiments on MR sequences of 145 subjects show that
FullLVNet achieves highly accurate prediction with our intra- and inter-task
relatedness modeling, leading to an MAE of 190 mm$^2$, 1.41 mm, and 2.68 mm for the
average areas, RWT, and dimensions, and an error rate of 10.4\% for the phase
classification. This endows our method with great potential in comprehensive clinical
assessment of global, regional and dynamic cardiac function. | [
1,
0,
0,
0,
0,
0
] |
Title: On sparsity and power-law properties of graphs based on exchangeable point processes,
Abstract: This paper investigates properties of the class of graphs based on
exchangeable point processes. We provide asymptotic expressions for the number
of edges, number of nodes and degree distributions, identifying four regimes: a
dense regime, a sparse, almost dense regime, a sparse regime with power-law
behavior, and an almost extremely sparse regime. Our results allow us to derive
a consistent estimator for the scalar parameter tuning the sparsity of the
graph. We also propose a class of models within this framework where one can
separately control the local, latent structure and the global
sparsity/power-law properties of the graph. | [
0,
0,
1,
1,
0,
0
] |
Title: Information transmission on hybrid networks,
Abstract: Many real-world communication networks have a hybrid nature with both
fixed nodes and mobile nodes, such as mobile phone networks mainly composed
of fixed base stations and mobile phones. In this paper, we discuss the
information transmission process on the hybrid networks with both fixed and
mobile nodes. The fixed nodes (base stations) are connected as a spatial
lattice on the plane forming the information-carrying backbone, while the
mobile nodes (users), which are the sources and destinations of information
packets, connect to their current nearest fixed nodes respectively to deliver
and receive information packets. We observe a phase transition of the traffic
load in the hybrid network when the packet generation rate increases from below
to above a critical value, which measures the network's packet-delivery
capacity. We obtain the optimal speed of moving nodes leading to the maximum
network capacity. We further improve the network capacity by rewiring the fixed
nodes and by considering the current load of fixed nodes during packets
transmission. Our purpose is to optimize the network capacity of hybrid
networks from the perspective of network science, and provide some insights for
the construction of future communication infrastructures. | [
1,
1,
0,
0,
0,
0
] |
Title: Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN,
Abstract: We propose a novel technique to make neural networks robust to adversarial
examples using a generative adversarial network. We alternately train both
classifier and generator networks. The generator network generates an
adversarial perturbation that can easily fool the classifier network by using a
gradient of each image. Simultaneously, the classifier network is trained to
classify correctly both original and adversarial images generated by the
generator. These procedures help the classifier network to become more robust
to adversarial perturbations. Furthermore, our adversarial training framework
efficiently reduces overfitting and outperforms other regularization methods
such as Dropout. We applied our method to supervised learning for CIFAR
datasets, and experimental results show that our method significantly lowers
the generalization error of the network. To the best of our knowledge, this is
the first method which uses GAN to improve supervised learning. | [
1,
0,
0,
1,
0,
0
] |
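
The alternating training scheme described above can be sketched roughly as follows. This is hypothetical PyTorch code, not the paper's implementation: the tiny architectures, the perturbation scale `eps`, and the exact loss weighting are assumptions made only to illustrate the generator/classifier alternation driven by per-image input gradients.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins: `clf` maps images to logits, `gen` maps input gradients
# to a bounded perturbation (names and architectures are assumptions).
clf = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
gen = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 28 * 28), torch.nn.Tanh())
opt_c = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)

def train_step(x, y, eps=0.1):
    # Gradient of the classification loss w.r.t. each image, fed to the generator.
    x = x.requires_grad_(True)
    loss = F.cross_entropy(clf(x), y)
    grad = torch.autograd.grad(loss, x)[0].detach()

    # Generator step: produce a perturbation that increases the classifier loss.
    delta = eps * gen(grad).view_as(x)
    g_loss = -F.cross_entropy(clf(x.detach() + delta), y)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Classifier step: fit both the clean and the perturbed images.
    delta = eps * gen(grad).view_as(x).detach()
    c_loss = F.cross_entropy(clf(x.detach()), y) + F.cross_entropy(clf(x.detach() + delta), y)
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()

x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
train_step(x, y)
```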
Title: Bad Primes in Computational Algebraic Geometry,
Abstract: Computations over the rational numbers often suffer from intermediate
coefficient swell. One solution to this problem is to apply the given algorithm
modulo a number of primes and then lift the modular results to the rationals.
This method is guaranteed to work if we use a sufficiently large set of good
primes. In many applications, however, there is no efficient way of excluding
bad primes. In this note, we describe a technique for rational reconstruction
which will nevertheless return the correct result, provided the number of good
primes in the selected set of primes is large enough. We give a number of
illustrating examples which are implemented using the computer algebra system
Singular and the programming language Julia. We discuss applications of our
technique in computational algebraic geometry. | [
1,
0,
1,
0,
0,
0
] |
Title: Multi-Task Feature Learning for Knowledge Graph Enhanced Recommendation,
Abstract: Collaborative filtering often suffers from sparsity and cold start problems
in real recommendation scenarios; therefore, researchers and engineers usually
use side information to address the issues and improve the performance of
recommender systems. In this paper, we consider knowledge graphs as the source
of side information. We propose MKR, a Multi-task feature learning approach for
Knowledge graph enhanced Recommendation. MKR is a deep end-to-end framework
that utilizes knowledge graph embedding task to assist recommendation task. The
two tasks are associated by cross&compress units, which automatically share
latent features and learn high-order interactions between items in recommender
systems and entities in the knowledge graph. We prove that cross&compress units
have sufficient capability of polynomial approximation, and show that MKR is a
generalized framework over several representative methods of recommender
systems and multi-task learning. Through extensive experiments on real-world
datasets, we demonstrate that MKR achieves substantial gains in movie, book,
music, and news recommendation, over state-of-the-art baselines. MKR is also
shown to be able to maintain a decent performance even if user-item
interactions are sparse. | [
1,
0,
0,
1,
0,
0
] |
Title: On Consistency of Graph-based Semi-supervised Learning,
Abstract: Graph-based semi-supervised learning is one of the most popular methods in
machine learning. Some of its theoretical properties such as bounds for the
generalization error and the convergence of the graph Laplacian regularizer
have been studied in the computer science and statistics literature. However, a
fundamental statistical property, the consistency of the estimator from this
method has not been proved. In this article, we study the consistency problem
under a non-parametric framework. We prove the consistency of graph-based
learning in the case that the estimated scores are enforced to be equal to the
observed responses for the labeled data. The sample sizes of both labeled and
unlabeled data are allowed to grow in this result. When the estimated scores
are not required to be equal to the observed responses, a tuning parameter is
used to balance the loss function and the graph Laplacian regularizer. We give
a counterexample demonstrating that the estimator for this case can be
inconsistent. The theoretical findings are supported by numerical studies. | [
0,
0,
0,
1,
0,
0
] |
Title: Modeling Label Ambiguity for Neural List-Wise Learning to Rank,
Abstract: List-wise learning to rank methods are considered to be the state-of-the-art.
One of the major problems with these methods is that the ambiguous nature of
relevance labels in learning to rank data is ignored. Ambiguity of relevance
labels refers to the phenomenon that multiple documents may be assigned the
same relevance label for a given query, so that no preference order should be
learned for those documents. In this paper we propose a novel sampling
technique for computing a list-wise loss that can take into account this
ambiguity. We show the effectiveness of the proposed method by training a
3-layer deep neural network. We compare our new loss function to two strong
baselines: ListNet and ListMLE. We show that our method generalizes better and
significantly outperforms other methods on the validation and test sets. | [
1,
0,
0,
1,
0,
0
] |
Title: Whitehead torsion of inertial h-cobordisms,
Abstract: We study the Whitehead torsions of inertial h-cobordisms, and identify
various types representing a nested sequence of subsets of the Whitehead group.
A number of examples are given to show that these subsets are all different in
general. | [
0,
0,
1,
0,
0,
0
] |
Title: Propagation from Deceptive News Sources: Who Shares, How Much, How Evenly, and How Quickly?,
Abstract: As people rely on social media as their primary sources of news, the spread
of misinformation has become a significant concern. In this large-scale study
of news in social media we analyze eleven million posts and investigate
propagation behavior of users that directly interact with news accounts
identified as spreading trusted versus malicious content. Unlike previous work,
which looks at specific rumors, topics, or events, we consider all content
propagated by various news sources. Moreover, we analyze and contrast
population versus sub-population behaviour (by demographics) when spreading
misinformation, and distinguish between two types of propagation, i.e., direct
retweets and mentions. Our evaluation examines how evenly, how many, how
quickly, and which users propagate content from various types of news sources
on Twitter.
Our analysis has identified several key differences in propagation behavior
from trusted versus suspicious news sources. These include high inequity in the
diffusion rate based on the source of disinformation, with a small group of
highly active users responsible for the majority of disinformation spread
overall and within each demographic. Analysis by demographics showed that users
with lower annual income and education share more from disinformation sources
compared to their counterparts. News content is shared significantly more
quickly from trusted, conspiracy, and disinformation sources compared to
clickbait and propaganda. Older users propagate news from trusted sources more
quickly than younger users, but they share from suspicious sources after longer
delays. Finally, users who interact with clickbait and conspiracy sources are
likely to share from propaganda accounts, but not the other way around. | [
1,
0,
0,
0,
0,
0
] |
Title: Self-protected nanoscale thermometry based on spin defects in silicon carbide,
Abstract: Quantum sensors with solid state electron spins have attracted considerable
interest due to their nanoscale spatial resolution. A critical requirement is to
suppress the environmental noise of the solid state spin sensor. Here we
demonstrate a nanoscale thermometer based on silicon carbide (SiC) electron
spins. We experimentally demonstrate that the performance of the spin sensor is
robust against dephasing due to a self-protected mechanism. The SiC thermometer
may provide a promising platform for sensing in noisy environments, e.g.,
biological sensing. | [
0,
1,
0,
0,
0,
0
] |
Title: Robot Composite Learning and the Nunchaku Flipping Challenge,
Abstract: Advanced motor skills are essential for robots to physically coexist with
humans. Much research on robot dynamics and control has achieved success on
hyper robot motor capabilities, but mostly through heavily case-specific
engineering. Meanwhile, in terms of robot acquiring skills in a ubiquitous
manner, robot learning from human demonstration (LfD) has achieved great
progress, but still has limitations handling dynamic skills and compound
actions. In this paper, we present a composite learning scheme which goes
beyond LfD and integrates robot learning from human definition, demonstration,
and evaluation. The method tackles advanced motor skills that require dynamic
time-critical maneuver, complex contact control, and handling partly soft
partly rigid objects. We also introduce the "nunchaku flipping challenge", an
extreme test that puts hard requirements to all these three aspects. Continued
from our previous presentations, this paper introduces the latest update of the
composite learning scheme and the physical success of the nunchaku flipping
challenge. | [
1,
0,
0,
0,
0,
0
] |
Title: Querying Best Paths in Graph Databases,
Abstract: Querying graph databases has recently received much attention. We propose a
new approach to this problem, which balances competing goals of expressive
power, language clarity and computational complexity. A distinctive feature of
our approach is the ability to express properties of minimal (e.g. shortest)
and maximal (e.g. most valuable) paths satisfying given criteria. To express
complex properties in a modular way, we introduce labelling-generating
ontologies. The resulting formalism is computationally attractive -- queries
can be answered in non-deterministic logarithmic space in the size of the
database. | [
1,
0,
0,
0,
0,
0
] |
Title: Communication Complexity of Correlated Equilibrium in Two-Player Games,
Abstract: We show a communication complexity lower bound for finding a correlated
equilibrium of a two-player game. More precisely, we define a two-player $N
\times N$ game called the 2-cycle game and show that the randomized
communication complexity of finding a 1/poly($N$)-approximate correlated
equilibrium of the 2-cycle game is $\Omega(N)$. For small approximation values,
this answers an open question of Babichenko and Rubinstein (STOC 2017). Our
lower bound is obtained via a direct reduction from the unique set disjointness
problem. | [
1,
0,
0,
0,
0,
0
] |
Title: On the computability of graph Turing machines,
Abstract: We consider graph Turing machines, a model of parallel computation on a
graph, in which each vertex is only capable of performing one of a finite
number of operations. This model of computation is a natural generalization of
several well-studied notions of computation, including ordinary Turing
machines, cellular automata, and parallel graph dynamical systems. We analyze
the power of computations that can take place in this model, both in terms of
the degrees of computability of the functions that can be computed, and the
time and space resources needed to carry out these computations. We further
show that properties of the underlying graph have significant consequences for
the power of computation thereby obtained. In particular, we show that every
arithmetically definable set can be computed by a graph Turing machine in
constant time, and that every computably enumerable Turing degree can be
computed in constant time and linear space by a graph Turing machine whose
underlying graph has finite degree. | [
1,
0,
1,
0,
0,
0
] |
Title: SCRank: Spammer and Celebrity Ranking in Directed Social Networks,
Abstract: Many online social networks allow directed edges: Alice can unilaterally add
an "edge" to Bob, typically indicating interest in Bob or Bob's content,
without Bob's permission or reciprocation. In directed social networks we
observe the rise of two distinctive classes of users: celebrities who accrue
unreciprocated incoming links, and follow spammers, who generate unreciprocated
outgoing links. Identifying users in these two classes is important for abuse
detection, user and content ranking, privacy choices, and other social network
features.
In this paper we develop SCRank, an iterative algorithm to identify such
users. We analyze SCRank both theoretically and experimentally. The
spammer-celebrity definition is not amenable to analysis using standard power
iteration, so we develop a novel potential function argument to show
convergence to an approximate equilibrium point for a class of algorithms
including SCRank. We then use experimental evaluation on a real global-scale
social network and on synthetically generated graphs to observe that the
algorithm converges quickly and consistently. Using synthetic data with
built-in ground truth, we also experimentally show that the algorithm provides
a good approximation to planted celebrities and spammers. | [
1,
0,
0,
0,
0,
0
] |
Title: Taylor series and twisting-index invariants of coupled spin-oscillators,
Abstract: About six years ago, semitoric systems on 4-dimensional manifolds were
classified by Pelayo & Vu Ngoc by means of five invariants. A standard example
of such a system is the coupled spin-oscillator on $\mathbb{S}^2 \times
\mathbb{R}^2$. Calculations of three of the five semitoric invariants of this
system (namely the number of focus-focus singularities, the generalised
semitoric polygon, and the height invariant) already appeared in the
literature, but the so-called twisting index was not yet computed and, of the
so-called Taylor series invariant, only the linear terms were known.
In the present paper, we complete the list of invariants for the coupled
spin-oscillator by calculating higher order terms of the Taylor series
invariant and by computing the twisting index. Moreover, we prove that the
Taylor series invariant has certain symmetry properties that make the even
powers in one of the variables vanish and allow us to show superintegrability
of the coupled spin-oscillator on the zero energy level. | [
0,
0,
1,
0,
0,
0
] |
Title: Optimal DoF region of the K-User MISO BC with Partial CSIT,
Abstract: We consider the $K$-User Multiple-Input-Single-Output (MISO) Broadcast
Channel (BC) where the transmitter, equipped with $M$ antennas, serves $K$
users, with $K \leq M$. The transmitter has access to a partial channel state
information of the users. This is modelled by letting the variance of the
Channel State Information at the Transmitter (CSIT) error of user $i$ scale as
$O(P^{-\alpha_i}$) for the Signal-to-Noise Ratio (SNR) $P$ and some constant
$\alpha_i \geq 0$. In this work we derive the optimal Degrees-of-Freedom (DoF)
region in such a setting and we show that Rate-Splitting (RS) is the key scheme
to achieve such a region. | [
1,
0,
0,
0,
0,
0
] |
Title: Active learning of constitutive relation from mesoscopic dynamics for macroscopic modeling of non-Newtonian flows,
Abstract: We simulate complex fluids by means of an on-the-fly coupling of the bulk
rheology to the underlying microstructure dynamics. In particular, a
macroscopic continuum model of polymeric fluids is constructed without a
pre-specified constitutive relation, but instead it is actively learned from
mesoscopic simulations where the dynamics of polymer chains is explicitly
computed. To couple the macroscopic rheology of polymeric fluids and the
microscale dynamics of polymer chains, the continuum approach (based on the
finite volume method) provides the transient flow field as inputs for the
(mesoscopic) dissipative particle dynamics (DPD), and in turn DPD returns an
effective constitutive relation to close the continuum equations. In this
multiscale modeling procedure, we employ an active learning strategy based on
Gaussian process regression (GPR) to minimize the number of expensive DPD
simulations, where adaptively selected DPD simulations are performed only as
necessary. Numerical experiments are carried out for flow past a circular
cylinder of a non-Newtonian fluid, modeled at the mesoscopic level by
bead-spring chains. The results show that only five DPD simulations are
required to achieve an effective closure of the continuum equations at Reynolds
number Re=10. Furthermore, when Re is increased to 100, only one additional DPD
simulation is required for constructing an extended GPR-informed model closure.
Compared to traditional message-passing multiscale approaches, applying an
active learning scheme to multiscale modeling of non-Newtonian fluids can
significantly increase the computational efficiency. Although the method
demonstrated here obtains only a local viscosity from the mesoscopic model, it
can be extended to other multiscale models of complex fluids whose
macro-rheology is unknown. | [
0,
1,
0,
0,
0,
0
] |
Title: Collective search with finite perception: transient dynamics and search efficiency,
Abstract: Motile organisms often use finite spatial perception of their surroundings to
navigate and search their habitats. Yet standard models of search are usually
based on purely local sensory information. To model how a finite perceptual
horizon affects ecological search, we propose a framework for optimal
navigation that combines concepts from random walks and optimal control theory.
We show that, while local strategies are optimal on asymptotically long and
short search times, finite perception yields faster convergence and increased
search efficiency over transient time scales relevant in biological systems.
The benefit of the finite horizon can be maintained by the searchers tuning
their response sensitivity to the length scale of the stimulant in the
environment, and is enhanced when the agents interact as a result of increased
consensus within subpopulations. Our framework sheds light on the role of
spatial perception and transients in search movement and collective sensing of
the environment. | [
0,
0,
0,
0,
1,
0
] |
Title: The first moment of cusp form L-functions in weight aspect on average,
Abstract: We study the asymptotic behaviour of the twisted first moment of central
$L$-values associated to cusp forms in weight aspect on average. Our estimate
of the error term allows extending the logarithmic length of mollifier $\Delta$
up to 2. The best previously known result, due to Iwaniec and Sarnak, was
$\Delta<1$. The proof is based on a representation formula for the error in
terms of Legendre polynomials. | [
0,
0,
1,
0,
0,
0
] |
Title: Analyzing Boltzmann Samplers for Bose-Einstein Condensates with Dirichlet Generating Functions,
Abstract: Boltzmann sampling is commonly used to uniformly sample objects of a
particular size from large combinatorial sets. For this technique to be
effective, one needs to prove that (1) the sampling procedure is efficient and
(2) objects of the desired size are generated with sufficiently high
probability. We use this approach to give a provably efficient sampling
algorithm for a class of weighted integer partitions related to Bose-Einstein
condensation from statistical physics. Our sampling algorithm is a
probabilistic interpretation of the ordinary generating function for these
objects, derived from the symbolic method of analytic combinatorics. Using the
Khintchine-Meinardus probabilistic method to bound the rejection rate of our
Boltzmann sampler through singularity analysis of Dirichlet generating
functions, we offer an alternative approach to analyze Boltzmann samplers for
objects with multiplicative structure. | [
1,
0,
0,
0,
0,
0
] |
Title: On Invariant Random Subgroups of Block-Diagonal Limits of Symmetric Groups,
Abstract: We classify the ergodic invariant random subgroups of block-diagonal limits
of symmetric groups in the cases when the groups are simple and the associated
dimension groups have finite dimensional state spaces. These block-diagonal
limits arise as the transformation groups (full groups) of Bratteli diagrams
that preserve the cofinality of infinite paths in the diagram. Given a simple
full group $G$ admitting only a finite number of ergodic measures on the
path-space $X$ of the associated Bratteli digram, we prove that every non-Dirac
ergodic invariant random subgroup of $G$ arises as the stabilizer distribution
of the diagonal action on $X^n$ for some $n\geq 1$. As a corollary, we
establish that every group character $\chi$ of $G$ has the form $\chi(g) =
Prob(g\in K)$, where $K$ is a conjugation-invariant random subgroup of $G$. | [
0,
0,
1,
0,
0,
0
] |
Title: Theoretical Analysis of Sparse Subspace Clustering with Missing Entries,
Abstract: Sparse Subspace Clustering (SSC) is a popular unsupervised machine learning
method for clustering data lying close to an unknown union of low-dimensional
linear subspaces; a problem with numerous applications in pattern recognition
and computer vision. Even though the behavior of SSC for complete data is by
now well-understood, little is known about its theoretical properties when
applied to data with missing entries. In this paper we give theoretical
guarantees for SSC with incomplete data, and analytically establish that
projecting the zero-filled data onto the observation pattern of the point being
expressed leads to a substantial improvement in performance. The main insight
that stems from our analysis is that even though the projection induces
additional missing entries, this is counterbalanced by the fact that the
projected and zero-filled data are in effect incomplete points associated with
the union of the corresponding projected subspaces, with respect to which the
point being expressed is complete. The significance of this phenomenon
potentially extends to the entire class of self-expressive methods. | [
0,
0,
0,
1,
0,
0
] |
Title: On certain geometric properties in Banach spaces of vector-valued functions,
Abstract: We consider a certain type of geometric properties of Banach spaces, which
includes for instance octahedrality, almost squareness, lushness and the
Daugavet property. For this type of properties, we obtain a general reduction
theorem, which, roughly speaking, states the following: if the property in
question is stable under certain finite absolute sums (for example finite
$\ell^p$-sums), then it is also stable under the formation of corresponding
Köthe-Bochner spaces (for example $L^p$-Bochner spaces). From this general
theorem, we obtain as corollaries a number of new results as well as some
alternative proofs of already known results concerning octahedral and almost
square spaces and their relatives, diameter-two-properties, lush spaces and
other classes. | [
0,
0,
1,
0,
0,
0
] |
Title: Social Media Analysis based on Semanticity of Streaming and Batch Data,
Abstract: Languages shared by people differ across regions in their
accents, pronunciation and word usage. In this era, the sharing of language takes
place mainly through social media and blogs. Every second, a large number of such micro
posts appear, which induces the need to process those micro posts in order to
extract knowledge from them. Knowledge extraction differs with respect to the
application, with research in cognitive science informing its requirements.
This work advances such research by extracting
semantic information from streaming and batch data in applications like Named
Entity Recognition and Author Profiling. For Named Entity
Recognition, the context of a single micro post is utilized, while the context that
lies in the pool of micro posts is utilized to identify the sociolect aspects
of the author of those micro posts. In this work a Conditional Random Field has
been utilized to perform the entity recognition, and a novel approach has been
proposed to find the sociolect aspects of the author (gender, age group). | [
1,
0,
0,
0,
0,
0
] |
Title: Chiral Topological Superconductors Enhanced by Long-Range Interactions,
Abstract: We study the phase diagram and edge states of a two-dimensional p-wave
superconductor with long-range hopping and pairing amplitudes. New topological
phases and quasiparticles different from the usual short-range model are
obtained. When both hopping and pairing terms decay with the same exponent, one
of the topological chiral phases with propagating Majorana edge states gets
significantly enhanced by long-range couplings. On the other hand, when the
long-range pairing amplitude decays more slowly than the hopping, we discover
new topological phases where propagating Majorana fermions at each edge pair
nonlocally and become gapped even in the thermodynamic limit. Remarkably, these
nonlocal edge states are still robust, remain separated from the bulk, and are
localized at both edges at the same time. The inclusion of long-range effects
is potentially applicable to recent experiments with magnetic impurities and
islands in 2D superconductors. | [
0,
1,
0,
0,
0,
0
] |
Title: An Algebraic Glimpse at Bunched Implications and Separation Logic,
Abstract: We overview the logic of Bunched Implications (BI) and Separation Logic (SL)
from a perspective inspired by Hiroakira Ono's algebraic approach to
substructural logics. We propose generalized BI algebras (GBI-algebras) as a
common framework for algebras arising via "declarative resource reading",
intuitionistic generalizations of relation algebras and arrow logics and the
distributive Lambek calculus with intuitionistic implication. Apart from
existing models of BI (in particular, heap models and effect algebras), we also
cover models arising from weakening relations, formal languages or more
fine-grained treatment of labelled trees and semistructured data. After briefly
discussing the lattice of subvarieties of GBI, we present a suitable duality
for GBI along the lines of Esakia and Priestley and an algebraic proof of cut
elimination in the setting of residuated frames of Galatos and Jipsen. We also
show how the algebraic approach allows generic results on decidability, both
positive and negative ones. In the final part of the paper, we gently introduce
the substructural audience to some theory behind state-of-art tools,
culminating with an algebraic and proof-theoretic presentation of
(bi-)abduction. | [
1,
0,
0,
0,
0,
0
] |
Title: The adapted hyper-Kähler structure on the crown domain,
Abstract: Let $\Xi$ be the crown domain associated with a non-compact irreducible
hermitian symmetric space $G/K$. We give an explicit description of the
unique $G$-invariant adapted hyper-Kähler structure on $\Xi$, i.e. compatible
with the adapted complex structure $J_{ad}$ and
with the $G$-invariant Kähler structure of $G/K$. We also compute
invariant potentials of the involved Kähler metrics and the associated moment
maps. | [
0,
0,
1,
0,
0,
0
] |
Title: Exact traveling wave solutions of 1D model of cancer invasion,
Abstract: In this paper we consider the continuous mathematical model of tumour growth
and invasion based on the model introduced by Anderson, Chaplain et al.
\cite{Anderson&Chaplain2000}, for the case of one space dimension. The model
consists of a system of three coupled nonlinear reaction-diffusion-taxis
partial differential equations describing the interactions between cancer
cells, the matrix degrading enzyme and the tissue. For this model under certain
conditions on the model parameters we obtain the exact analytical solutions in
terms of traveling wave variables. These solutions are smooth positive definite
functions whose profiles agree with those obtained from numerical computations
\cite{Chaplain&Lolas2006} for not very large time intervals. | [
0,
0,
0,
0,
1,
0
] |
Title: On purity theorem of Lusztig's perverse sheaves,
Abstract: Let $Q$ be a finite quiver without loops and $\mathcal{Q}_{\alpha}$ be the
Lusztig category for any dimension vector $\alpha$. The purpose of this paper
is to prove that all Frobenius eigenvalues of the $i$-th cohomology
$\mathcal{H}^i(\mathcal{L})|_x$ for a simple perverse sheaf $\mathcal{L}\in
\mathcal{Q}_{\alpha}$ and $x\in
\mathbb{E}_{\alpha}^{F^n}=\mathbb{E}_{\alpha}(\mathbb{F}_{q^n})$ are equal to
$(\sqrt{q^n})^{i}$, as conjectured by Schiffmann (\cite{Schiffmann2}). As
an application, we prove the existence of a class of Hall polynomials. | [
0,
0,
1,
0,
0,
0
] |
Title: Abstract Interpretation using a Language of Symbolic Approximation,
Abstract: The traditional abstract domain framework for imperative programs suffers
from several shortcomings; in particular it does not allow precise symbolic
abstractions. To solve these problems, we propose a new abstract interpretation
framework, based on symbolic expressions used both as an abstraction of the
program, and as the input analyzed by abstract domains. We demonstrate new
applications of the framework: an abstract domain that efficiently propagates
constraints across the whole program; a new formalization of functor domains as
approximate translation, which allows the production of approximate programs,
on which we can perform classical symbolic techniques. We used these to build a
complete analyzer for embedded C programs, that demonstrates the practical
applicability of the framework. | [
1,
0,
0,
0,
0,
0
] |
Title: The Integral Transform of N.I.Akhiezer,
Abstract: We study the integral transform which appeared in a different form in
Akhiezer's textbook "Lectures on Integral Transforms". | [
0,
0,
1,
0,
0,
0
] |
Title: Quantitative CBA: Small and Comprehensible Association Rule Classification Models,
Abstract: Quantitative CBA is a postprocessing algorithm for the association rule
classification algorithm CBA (Liu et al, 1998). QCBA uses the original,
undiscretized numerical attributes to optimize the discovered association
rules, refining the boundaries of literals in the antecedent of the rules
produced by CBA. Some rules, as well as literals from the rules, can consequently
be removed, which makes the resulting classifier smaller. One-rule
classification and crisp rules arguably make CBA classification models the most
comprehensible among all association rule classification algorithms. These
desirable properties are retained by QCBA. The postprocessing is conceptually
fast, because it is performed on a relatively small number of rules that passed
data coverage pruning in CBA. A benchmark of our QCBA approach on 22 UCI datasets
shows an average 53% decrease in the total size of the model as measured by the
total number of conditions in all rules. Model accuracy remains on the same
level as for CBA. | [
1,
0,
0,
1,
0,
0
] |
Title: Quermassintegral preserving curvature flow in Hyperbolic space,
Abstract: We consider the quermassintegral preserving flow of closed \emph{h-convex}
hypersurfaces in hyperbolic space with the speed given by any positive power of
a smooth symmetric, strictly increasing, and homogeneous of degree one function
$f$ of the principal curvatures which is inverse concave and has dual $f_*$
approaching zero on the boundary of the positive cone. We prove that if the
initial hypersurface is \emph{h-convex}, then the solution of the flow becomes
strictly \emph{h-convex} for $t>0$, the flow exists for all time and converges
to a geodesic sphere exponentially in the smooth topology. | [
0,
0,
1,
0,
0,
0
] |
Title: Cloaking for a quasi-linear elliptic partial differential equation,
Abstract: In this article we consider cloaking for a quasi-linear elliptic partial
differential equation of divergence type defined on a bounded domain in
$\mathbb{R}^N$ for $N=2,3$. We show that a perfect cloak can be obtained via a
singular change of variables scheme and an approximate cloak can be achieved
via a regular change of variables scheme. These approximate cloaks though
non-degenerate are anisotropic. We also show, within the framework of
homogenization, that it is possible to get isotropic regular approximate
cloaks. This work generalizes to quasi-linear settings previous work on
cloaking in the context of Electrical Impedance Tomography for the conductivity
equation. | [
0,
0,
1,
0,
0,
0
] |
Title: Good Arm Identification via Bandit Feedback,
Abstract: We consider a novel stochastic multi-armed bandit problem called {\em good
arm identification} (GAI), where a good arm is defined as an arm with expected
reward greater than or equal to a given threshold. GAI is a pure-exploration
problem in which a single agent repeatedly outputs an arm as soon as it is
identified as a good one, before confirming that the other arms are actually
not good. The objective of GAI is to minimize the number of samples for each
process. We find that GAI faces a new kind of dilemma, the {\em
exploration-exploitation dilemma of confidence}, which is a difficulty
different from the one in best arm identification. As a result, an efficient design of
algorithms for GAI is quite different from that for the best arm
identification. We derive a lower bound on the sample complexity of GAI that is
tight up to the logarithmic factor $\mathrm{O}(\log \frac{1}{\delta})$ for
acceptance error rate $\delta$. We also develop an algorithm whose sample
complexity almost matches the lower bound. We also confirm experimentally that
our proposed algorithm outperforms naive algorithms in synthetic settings based
on a conventional bandit problem and clinical trial researches for rheumatoid
arthritis. | [
0,
0,
0,
1,
0,
0
] |
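To make the good-arm-identification setting above concrete, here is a minimal Python sketch. It assumes Bernoulli arms and uses generic Hoeffding-style confidence bounds to accept or reject arms against the threshold; the function name, parameters, and bound constants are illustrative assumptions, and this is a sketch of the problem setup rather than the paper's proposed algorithm or its lower-bound analysis.

```python
import numpy as np

def good_arm_identification(means, threshold, delta=0.05, horizon=20000, seed=0):
    """Sketch of a threshold bandit: output an arm once its lower confidence
    bound exceeds the threshold; discard it once its upper bound falls below.
    Bernoulli rewards are assumed purely for illustration."""
    rng = np.random.default_rng(seed)
    k = len(means)
    counts = np.zeros(k, dtype=int)
    sums = np.zeros(k)
    active = set(range(k))
    outputs = []
    for _ in range(horizon):
        if not active:
            break
        # pull the active arm with the highest upper confidence bound
        ucb = {}
        for a in active:
            if counts[a] == 0:
                ucb[a] = np.inf
            else:
                radius = np.sqrt(np.log(4 * k * counts[a] ** 2 / delta) / (2 * counts[a]))
                ucb[a] = sums[a] / counts[a] + radius
        a = max(active, key=lambda i: ucb[i])
        reward = rng.random() < means[a]
        counts[a] += 1
        sums[a] += reward
        mean = sums[a] / counts[a]
        radius = np.sqrt(np.log(4 * k * counts[a] ** 2 / delta) / (2 * counts[a]))
        if mean - radius >= threshold:      # confidently good: output it
            outputs.append(a)
            active.discard(a)
        elif mean + radius < threshold:     # confidently not good: discard it
            active.discard(a)
    return outputs, counts

if __name__ == "__main__":
    good, pulls = good_arm_identification([0.2, 0.45, 0.55, 0.7], threshold=0.5)
    print("arms declared good:", good, "pulls per arm:", pulls)
```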
Title: Learning compressed representations of blood samples time series with missing data,
Abstract: Clinical measurements collected over time are naturally represented as
multivariate time series (MTS), which often contain missing data. An
autoencoder can learn low dimensional vectorial representations of MTS that
preserve important data characteristics, but cannot deal explicitly with
missing data. In this work, we propose a new framework that combines an
autoencoder with the Time series Cluster Kernel (TCK), a kernel that accounts
for missingness patterns in MTS. Via kernel alignment, we incorporate TCK in
the autoencoder to improve the learned representations in presence of missing
data. We consider a classification problem of MTS with missing values,
representing blood samples of patients with surgical site infection. With our
approach, rather than with a standard autoencoder, we learn representations in
low dimensions that can be classified better. | [
1,
0,
0,
1,
0,
0
] |
Title: Fuzzy Galois connections on fuzzy sets,
Abstract: In fairly elementary terms this paper presents how the theory of preordered
fuzzy sets, more precisely quantale-valued preorders on quantale-valued fuzzy
sets, is established under the guidance of enriched category theory. Motivated
by several key results from the theory of quantaloid-enriched categories, this
paper develops all needed ingredients purely in order-theoretic languages for
the readership of fuzzy set theorists, with particular attention paid to fuzzy
Galois connections between preordered fuzzy sets. | [
1,
0,
0,
0,
0,
0
] |
Title: QWIRE Practice: Formal Verification of Quantum Circuits in Coq,
Abstract: We describe an embedding of the QWIRE quantum circuit language in the Coq
proof assistant. This allows programmers to write quantum circuits using
high-level abstractions and to prove properties of those circuits using Coq's
theorem proving features. The implementation uses higher-order abstract syntax
to represent variable binding and provides a type-checking algorithm for linear
wire types, ensuring that quantum circuits are well-formed. We formalize a
denotational semantics that interprets QWIRE circuits as superoperators on
density matrices, and prove the correctness of some simple quantum programs. | [
1,
0,
0,
0,
0,
0
] |
Title: The cost of fairness in classification,
Abstract: We study the problem of learning classifiers with a fairness constraint, with
three main contributions towards the goal of quantifying the problem's inherent
tradeoffs. First, we relate two existing fairness measures to cost-sensitive
risks. Second, we show that for cost-sensitive classification and fairness
measures, the optimal classifier is an instance-dependent thresholding of the
class-probability function. Third, we show how the tradeoff between accuracy
and fairness is determined by the alignment between the class-probabilities for
the target and sensitive features. Underpinning our analysis is a general
framework that casts the problem of learning with a fairness requirement as one
of minimising the difference of two statistical risks. | [
1,
0,
0,
0,
0,
0
] |
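The second contribution above rests on a standard fact from cost-sensitive classification: with false-positive cost c_FP and false-negative cost c_FN, the Bayes-optimal rule predicts the positive class exactly when the class-probability eta(x) exceeds c_FP/(c_FP + c_FN). The sketch below illustrates only that plain, unconstrained case; the paper's instance-dependent threshold arising from the fairness constraint is not reproduced here, and the function name and example probabilities are assumptions.

```python
import numpy as np

def bayes_optimal_prediction(eta, c_fp=1.0, c_fn=1.0):
    """Cost-sensitive Bayes classifier: predict positive when the
    class-probability eta(x) exceeds the cost-ratio threshold."""
    threshold = c_fp / (c_fp + c_fn)
    return (eta >= threshold).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eta = rng.uniform(size=10)                    # stand-in class probabilities
    print("balanced costs  :", bayes_optimal_prediction(eta))
    print("costly false pos:", bayes_optimal_prediction(eta, c_fp=3.0, c_fn=1.0))
```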
Title: The algebraic structure of cut Feynman integrals and the diagrammatic coaction,
Abstract: We study the algebraic and analytic structure of Feynman integrals by
proposing an operation that maps an integral into pairs of integrals obtained
from a master integrand and a corresponding master contour. This operation is a
coaction. It reduces to the known coaction on multiple polylogarithms, but
applies more generally, e.g. to hypergeometric functions. The coaction also
applies to generic one-loop Feynman integrals with any configuration of
internal and external masses, and in dimensional regularization. In this case,
we demonstrate that it can be given a diagrammatic representation purely in
terms of operations on graphs, namely contractions and cuts of edges. The
coaction gives direct access to (iterated) discontinuities of Feynman integrals
and facilitates a straightforward derivation of the differential equations they
admit. In particular, the differential equations for any one-loop integral are
determined by the diagrammatic coaction using limited information about their
maximal, next-to-maximal, and next-to-next-to-maximal cuts. | [
0,
0,
1,
0,
0,
0
] |
Title: Implementing universal nonadiabatic holonomic quantum gates with transmons,
Abstract: Geometric phases are well known to be noise-resilient in quantum
evolutions/operations. Holonomic quantum gates provide us with a robust way
towards universal quantum computation, as these quantum gates are actually
induced by nonabelian geometric phases. Here we propose and elaborate how to
efficiently implement universal nonadiabatic holonomic quantum gates on simpler
superconducting circuits, with a single transmon serving as a qubit. In our
proposal, an arbitrary single-qubit holonomic gate can be realized in a
single-loop scenario, by varying the amplitudes and phase difference of two
microwave fields resonantly coupled to a transmon, while nontrivial two-qubit
holonomic gates may be generated with a transmission-line resonator being
simultaneously coupled to the two target transmons in an effective resonant
way. Moreover, our scenario may readily be scaled up to a two-dimensional
lattice configuration, which is able to support large scalable quantum
computation, paving the way for practically implementing universal nonadiabatic
holonomic quantum computation with superconducting circuits. | [
0,
1,
0,
0,
0,
0
] |
Title: Fine cophasing of segmented aperture telescopes with ZELDA, a Zernike wavefront sensor in the diffraction-limited regime,
Abstract: Segmented aperture telescopes require an alignment procedure with successive
steps from coarse alignment to monitoring process in order to provide very high
optical quality images for stringent science operations such as exoplanet
imaging. The final step, referred to as fine phasing, calls for a high
sensitivity wavefront sensing and control system in a diffraction-limited
regime to achieve segment alignment with nanometric accuracy. In this context,
Zernike wavefront sensors represent promising options for such a calibration. A
concept called the Zernike unit for segment phasing (ZEUS) was previously
developed for ground-based applications to operate under seeing-limited images.
Such a concept is, however, not suitable for fine cophasing with
diffraction-limited images. We revisit ZELDA, a Zernike sensor that was
developed for the measurement of residual aberrations in exoplanet direct
imagers, to measure segment piston, tip, and tilt in the diffraction-limited
regime. We introduce a novel analysis scheme of the sensor signal that relies
on piston, tip, and tilt estimators for each segment, and provide probabilistic
insights to predict the success of a closed-loop correction as a function of
the initial wavefront error. The sensor unambiguously and simultaneously
retrieves segment piston and tip-tilt misalignment. Our scheme allows for
correction of these errors in closed-loop operation down to nearly zero
residuals in a few iterations. This sensor also shows low sensitivity to
misalignment of its parts and high ability for operation with a relatively
bright natural guide star. Our cophasing sensor relies on existing mask
technologies that make the concept already available for segmented apertures in
future space missions. | [
0,
1,
0,
0,
0,
0
] |
Title: Why Pay More When You Can Pay Less: A Joint Learning Framework for Active Feature Acquisition and Classification,
Abstract: We consider the problem of active feature acquisition, where we sequentially
select the subset of features in order to achieve the maximum prediction
performance in the most cost-effective way. In this work, we formulate this
active feature acquisition problem as a reinforcement learning problem, and
provide a novel framework for jointly learning both the RL agent and the
classifier (environment). We also introduce a more systematic way of encoding
subsets of features that can properly handle the innate challenge of missing
entries in active feature acquisition problems, using the orderless
LSTM-based set encoding mechanism, which readily fits in the joint learning
framework. We evaluate our model on a carefully designed synthetic dataset for
active feature acquisition as well as several real datasets such as
electronic health record (EHR) datasets, on which it outperforms all baselines in
terms of prediction performance as well as feature acquisition cost. | [
1,
0,
0,
1,
0,
0
] |
Title: The Hasse Norm Principle For Biquadratic Extensions,
Abstract: We give an asymptotic formula for the number of biquadratic extensions of the
rationals of bounded discriminant that fail the Hasse norm principle. | [
0,
0,
1,
0,
0,
0
] |
Title: Static vs Adaptive Strategies for Optimal Execution with Signals,
Abstract: We consider an optimal execution problem in which a trader is looking at a
short-term price predictive signal while trading. In the case where the trader
is creating an instantaneous market impact, we show that transactions costs
resulting from the optimal adaptive strategy are substantially lower than the
corresponding costs of the optimal static strategy. Later, we investigate the
case where the trader is creating transient market impact. We show that
strategies in which the trader is observing the signal a number of times during
the trading period, can dramatically reduce the transaction costs and improve
the performance of the optimal static strategy. These results answer a question
which was raised by Brigo and Piat [6], by analyzing two cases where adaptive
strategies can improve the performance of the execution. | [
0,
0,
0,
0,
0,
1
] |
Title: New descriptions of the weighted Reed-Muller codes and the homogeneous Reed-Muller codes,
Abstract: We give a description of the weighted Reed-Muller codes over a prime field in
a modular algebra. A description of the homogeneous Reed-Muller codes in the
same ambient space is presented for the binary case. A decoding procedure using
the Landrock-Manz method is developed. | [
1,
0,
1,
0,
0,
0
] |
Title: Criteria for the Absence and Existence of Bounded Solutions at the Threshold Frequency in a Junction of Quantum Waveguides,
Abstract: In the junction $\Omega$ of several semi-infinite cylindrical waveguides we
consider the Dirichlet Laplacian whose continuous spectrum is the ray
$[\lambda_\dagger, +\infty)$ with a positive cut-off value $\lambda_\dagger$.
We give two different criteria for the threshold resonance generated by
nontrivial bounded solutions to the Dirichlet problem for the Helmholtz
equation $-\Delta u=\lambda_\dagger u$ in $\Omega$. The first criterion is
quite simple and is convenient for disproving the existence of bounded solutions.
The second criterion is rather involved but can help to detect concrete shapes
supporting the resonance. Moreover, the latter distinguishes in a natural way
between stabilizing solutions, i.e., bounded but non-decaying ones, and trapped
modes with exponential decay at infinity. | [
0,
0,
1,
0,
0,
0
] |
Title: Stochastic Feedback Control of Systems with Unknown Nonlinear Dynamics,
Abstract: This paper studies the stochastic optimal control problem for systems with
unknown dynamics. First, an open-loop deterministic trajectory optimization
problem is solved without knowing the explicit form of the dynamical system.
Next, a Linear Quadratic Gaussian (LQG) controller is designed for the nominal
trajectory-dependent linearized system, such that under a small noise
assumption, the actual states remain close to the optimal trajectory. The
trajectory-dependent linearized system is identified using input-output
experimental data consisting of the impulse responses of the nominal system. A
computational example is given to illustrate the performance of the proposed
approach. | [
1,
0,
0,
0,
0,
0
] |
Title: Micrometer-Sized Water Ice Particles for Planetary Science Experiments: Influence of Surface Structure on Collisional Properties,
Abstract: Models and observations suggest that ice-particle aggregation at and beyond
the snowline dominates the earliest stages of planet-formation, which therefore
is subject to many laboratory studies. However, the pressure-temperature
gradients in proto-planetary disks mean that the ices are constantly processed,
undergoing phase changes between different solid phases and the gas phase. Open
questions remain as to whether the properties of the icy particles themselves
dictate collision outcomes and therefore how effectively collision experiments
reproduce conditions in protoplanetary environments. Previous experiments
often yielded apparently contradictory results on collision outcomes, only
agreeing on a temperature dependence that sets in above $\approx$ 210 K. By
exploiting the unique capabilities of the NIMROD neutron scattering instrument,
we characterized the bulk and surface structure of icy particles used in
collision experiments, and studied how these structures alter as a function of
temperature at a constant pressure of around 30 mbar. Our icy grains, formed
under liquid nitrogen, undergo changes in the crystalline ice-phase,
sublimation, sintering and surface pre-melting as they are heated from 103 to
247 K. An increase in the thickness of the diffuse surface layer from $\approx$
10 to $\approx$ 30 {\AA} ($\approx$ 2.5 to 12 bilayers) proves increased
molecular mobility at temperatures above $\approx$ 210 K. As none of the other
changes tie-in with the temperature trends in collisional outcomes, we conclude
that the surface pre-melting phenomenon plays a key role in collision
experiments at these temperatures. Consequently, the pressure-temperature
environment may have a larger influence on collision outcomes than previously
thought. | [
0,
1,
0,
0,
0,
0
] |
Title: The splashback radius of halos from particle dynamics. II. Dependence on mass, accretion rate, redshift, and cosmology,
Abstract: The splashback radius $R_{\rm sp}$, the apocentric radius of particles on
their first orbit after falling into a dark matter halo, has recently been
suggested as a physically motivated halo boundary that separates accreting from
orbiting material. Using the SPARTA code presented in Paper I, we analyze the
orbits of billions of particles in cosmological simulations of structure
formation and measure $R_{\rm sp}$ for a large sample of halos that span a mass
range from dwarf galaxy to massive cluster halos, reach redshift 8, and include
WMAP, Planck, and self-similar cosmologies. We analyze the dependence of
$R_{\rm sp}/R_{\rm 200m}$ and $M_{\rm sp}/M_{\rm 200m}$ on the mass accretion
rate $\Gamma$, halo mass, redshift, and cosmology. The scatter in these
relations varies between 0.02 and 0.1 dex. While we confirm the known trend
that $R_{\rm sp}/R_{\rm 200m}$ decreases with $\Gamma$, the relationships turn
out to be more complex than previously thought, demonstrating that $R_{\rm sp}$
is an independent definition of the halo boundary that cannot trivially be
reconstructed from spherical overdensity definitions. We present fitting
functions for $R_{\rm sp}/R_{\rm 200m}$ and $M_{\rm sp}/M_{\rm 200m}$ as a
function of accretion rate, peak height, and redshift, achieving an accuracy of
5% or better everywhere in the parameter space explored. We discuss the
physical meaning of the distribution of particle apocenters and show that the
previously proposed definition of $R_{\rm sp}$ as the radius of the steepest
logarithmic density slope encloses roughly three-quarters of the apocenters.
Finally, we conclude that no analytical model presented thus far can fully
explain our results. | [
0,
1,
0,
0,
0,
0
] |
Title: Empirical Evaluation of Parallel Training Algorithms on Acoustic Modeling,
Abstract: Deep learning models (DLMs) are state-of-the-art techniques in speech
recognition. However, training good DLMs can be time consuming especially for
production-size models and corpora. Although several parallel training
algorithms have been proposed to improve training efficiency, there is no clear
guidance on which one to choose for the task in hand due to lack of systematic
and fair comparison among them. In this paper we aim at filling this gap by
comparing four popular parallel training algorithms in speech recognition,
namely asynchronous stochastic gradient descent (ASGD), blockwise model-update
filtering (BMUF), bulk synchronous parallel (BSP) and elastic averaging
stochastic gradient descent (EASGD), on 1000-hour LibriSpeech corpora using
feed-forward deep neural networks (DNNs) and convolutional, long short-term
memory, DNNs (CLDNNs). Based on our experiments, we recommend using BMUF as the
top choice to train acoustic models since it is the most stable, scales well with
the number of GPUs, can achieve reproducible results, and in many cases even
outperforms single-GPU SGD. ASGD can be used as a substitute in some cases. | [
1,
0,
0,
0,
0,
0
] |
Title: Distributed Impedance Control of Latency-Prone Robotic Systems with Series Elastic Actuation,
Abstract: Robotic systems are increasingly relying on distributed feedback controllers
to tackle complex and latency-prone sensing and decision problems. These
demands come at the cost of a growing computational burden and, as a result,
larger controller latencies. To maximize robustness to mechanical disturbances
and achieve high control performance, we emphasize the necessity for executing
damping feedback in close proximity to the control plant while allocating
stiffness feedback in a latency-prone centralized control process.
Additionally, series elastic actuators (SEAs) are becoming prevalent in
torque-controlled robots during recent years to achieve compliant interactions
with environments and humans. However, designing optimal impedance controllers
and characterizing impedance performance for SEAs with time delays and
filtering are still under-explored problems. The presented study addresses the
optimal controller design problem by devising a critically-damped gain design
method for a class of SEA cascaded control architectures, which is composed of
outer-impedance and inner-torque feedback loops. Via the proposed controller
design criterion, we adopt frequency-domain methods to thoroughly analyze the
effects of time delays, filtering and load inertia on SEA impedance
performance. These results are further validated through the analysis,
simulation, and experimental testing on high-performance actuators and on an
omnidirectional mobile base. | [
1,
0,
0,
0,
0,
0
] |
Title: Simulation Methods for Stochastic Storage Problems: A Statistical Learning Perspective,
Abstract: We consider solution of stochastic storage problems through regression Monte
Carlo (RMC) methods. Taking a statistical learning perspective, we develop the
dynamic emulation algorithm (DEA) that unifies the different existing
approaches in a single modular template. We then investigate the two central
aspects of regression architecture and experimental design that constitute DEA.
For the regression piece, we discuss various non-parametric approaches, in
particular introducing the use of Gaussian process regression in the context of
stochastic storage. For simulation design, we compare the performance of
traditional design (grid discretization), against space-filling, and several
adaptive alternatives. The overall DEA template is illustrated with multiple
examples drawing from natural gas storage valuation and optimal control of
back-up generator in a microgrid. | [
0,
0,
0,
0,
0,
1
] |
Title: Flashes of Hidden Worlds at Colliders,
Abstract: (This is a general physics level overview article about hidden sectors, and
how they motivate searches for long-lived particles. Intended for publication
in Physics Today.)
Searches for new physics at the Large Hadron Collider have so far come up
empty, but we just might not be looking in the right place. Spectacular bursts
of particles appearing seemingly out of nowhere could shed light on some of
nature's most profound mysteries. | [
0,
1,
0,
0,
0,
0
] |
Title: Espresso: Brewing Java For More Non-Volatility with Non-volatile Memory,
Abstract: Fast, byte-addressable non-volatile memory (NVM) embraces both near-DRAM
latency and disk-like persistence, which has generated considerable interest
in revolutionizing the system software stack and programming models. However, it is
less understood how NVM can be combined with managed runtime like Java virtual
machine (JVM) to ease persistence management. This paper proposes Espresso, a
holistic extension to Java and its runtime, to enable Java programmers to
exploit NVM for persistence management with high performance. Espresso first
provides a general persistent heap design called Persistent Java Heap (PJH) to
manage persistent data as normal Java objects. The heap is then strengthened
with a recoverable mechanism to provide crash consistency for heap metadata. It
then provides a new abstraction called Persistent Java Object (PJO) to provide
an easy-to-use but safe persistent programming model for programmers to persist
application data. The evaluation confirms that Espresso significantly
outperforms state-of-the-art NVM support for Java (i.e., JPA and PCJ) while being
compatible with existing data structures in Java programs. | [
1,
0,
0,
0,
0,
0
] |
Title: Multimodal Machine Learning: A Survey and Taxonomy,
Abstract: Our experience of the world is multimodal - we see objects, hear sounds, feel
texture, smell odors, and taste flavors. Modality refers to the way in which
something happens or is experienced and a research problem is characterized as
multimodal when it includes multiple such modalities. In order for Artificial
Intelligence to make progress in understanding the world around us, it needs to
be able to interpret such multimodal signals together. Multimodal machine
learning aims to build models that can process and relate information from
multiple modalities. It is a vibrant multi-disciplinary field of increasing
importance and with extraordinary potential. Instead of focusing on specific
multimodal applications, this paper surveys the recent advances in multimodal
machine learning itself and presents them in a common taxonomy. We go beyond
the typical early and late fusion categorization and identify broader
challenges that are faced by multimodal machine learning, namely:
representation, translation, alignment, fusion, and co-learning. This new
taxonomy will enable researchers to better understand the state of the field
and identify directions for future research. | [
1,
0,
0,
0,
0,
0
] |
Title: The Motivic Cofiber of $τ$,
Abstract: Consider the Tate twist $\tau \in H^{0,1}(S^{0,0})$ in the mod 2 cohomology
of the motivic sphere. After 2-completion, the motivic Adams spectral sequence
realizes this element as a map $\tau \colon S^{0,-1} \to S^{0,0}$, with cofiber
$C\tau$. We show that this motivic 2-cell complex can be endowed with a unique
$E_{\infty}$ ring structure. Moreover, this promotes the known isomorphism
$\pi_{\ast,\ast} C\tau \cong
\mathrm{Ext}^{\ast,\ast}_{BP_{\ast}BP}(BP_{\ast},BP_{\ast})$ to an isomorphism
of rings which also preserves higher products.
We then consider the closed symmetric monoidal category $({
}_{C\tau}\textbf{Mod}, - \wedge_{C\tau} -)$ which lives in the kernel of Betti
realization. Given a motivic spectrum $X$, the $C\tau$-induced spectrum $X
\wedge C\tau$ is usually better behaved and easier to understand than $X$
itself. We specifically illustrate this concept in the examples of the mod 2
Eilenberg-MacLane spectrum $H\mathbb{F}_2$, the mod 2 Moore spectrum
$S^{0,0}/2$ and the connective hermitian $K$-theory spectrum $kq$. | [
0,
0,
1,
0,
0,
0
] |
Title: EEG machine learning with Higuchi fractal dimension and Sample Entropy as features for successful detection of depression,
Abstract: Reliable diagnosis of depressive disorder is essential for both optimal
treatment and prevention of fatal outcomes. In this study, we aimed to
elucidate the effectiveness of two non-linear measures, Higuchi Fractal
Dimension (HFD) and Sample Entropy (SampEn), in detecting depressive disorders
when applied on EEG. HFD and SampEn of EEG signals were used as features for
seven machine learning algorithms including Multilayer Perceptron, Logistic
Regression, Support Vector Machines with the linear and polynomial kernel,
Decision Tree, Random Forest, and Naive Bayes classifier, discriminating EEG
between healthy control subjects and patients diagnosed with depression. We
confirmed earlier observations that both non-linear measures can discriminate
EEG signals of patients from healthy control subjects. The results suggest that
good classification is possible even with a small number of principal
components. Average accuracy among classifiers ranged from 90.24% to 97.56%.
Among the two measures, SampEn had better performance. Using HFD and SampEn and
a variety of machine learning techniques, we can accurately discriminate between
patients diagnosed with depression and controls, which can serve as a highly
sensitive, clinically relevant marker for the diagnosis of depressive
disorders. | [
0,
0,
0,
1,
1,
0
] |
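For readers who want to reproduce the two non-linear features named above, here is a compact Python sketch of Higuchi Fractal Dimension and Sample Entropy for a single channel. The parameter choices (kmax, m, r) and the random stand-in signal are illustrative assumptions, not the settings used in the study.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1-D signal (textbook formulation)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lm = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length of the subsampled series x[m::k]
            length = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k * k)
            lm.append(length)
        lk.append(np.mean(lm))
    # slope of log L(k) versus log(1/k) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return slope

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) with Chebyshev distance, self-matches excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1   # exclude the self-match
        return count
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = rng.standard_normal(1000)           # stand-in for one EEG channel
    print("HFD:", higuchi_fd(signal), "SampEn:", sample_entropy(signal))
```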
Title: On Microtargeting Socially Divisive Ads: A Case Study of Russia-Linked Ad Campaigns on Facebook,
Abstract: Targeted advertising is meant to improve the efficiency of matching
advertisers to their customers. However, targeted advertising can also be
abused by malicious advertisers to efficiently reach people susceptible to
false stories, stoke grievances, and incite social conflict. Since targeted ads
are not seen by non-targeted and non-vulnerable people, malicious ads are
likely to go unreported and their effects undetected. This work examines a
specific case of malicious advertising, exploring the extent to which political
ads from the Russian Intelligence Research Agency (IRA) run prior to 2016 U.S.
elections exploited Facebook's targeted advertising infrastructure to
efficiently target ads on divisive or polarizing topics (e.g., immigration,
race-based policing) at vulnerable sub-populations. In particular, we do the
following: (a) We conduct U.S. census-representative surveys to characterize
how users with different political ideologies report, approve, and perceive
truth in the content of the IRA ads. Our surveys show that many ads are
"divisive": they elicit very different reactions from people belonging to
different socially salient groups. (b) We characterize how these divisive ads
are targeted to sub-populations that feel particularly aggrieved by the status
quo. Our findings support existing calls for greater transparency of content
and targeting of political ads. (c) We particularly focus on how the Facebook
ad API facilitates such targeting. We show how the enormous amount of personal
data Facebook aggregates about users and makes available to advertisers enables
such malicious targeting. | [
1,
0,
0,
0,
0,
0
] |
Title: On Helmholtz free energy for finite abstract simplicial complexes,
Abstract: We prove a Gauss-Bonnet formula X(G) = sum_x K(x), where K(x)=(-1)^dim(x)
(1-X(S(x))) is a curvature of a vertex x with unit sphere S(x) in the
Barycentric refinement G1 of a simplicial complex G. K(x) is dual to
(-1)^dim(x) for which Gauss-Bonnet is the definition of Euler characteristic X.
Because the connection Laplacian L'=1+A' of G is unimodular, where A' is the
adjacency matrix of the connection graph G', the Green function values
g(x,y) = (1+A')^-1_xy are integers and 1-X(S(x))=g(x,x). Gauss-Bonnet for K^+
reads therefore as str(g)=X(G), where str is the super trace. As g is a
time-discrete heat kernel, this is a cousin to McKean-Singer str(exp(-Lt)) =
X(G) for the Hodge Laplacian L=dd^* +d^*d which lives on the same Hilbert space
than L'. Both formulas hold for an arbitrary finite abstract simplicial complex
G. Writing V_x(y)= g(x,y) for the Newtonian potential of the connection
Laplacian, we prove sum_y V_x(y) = K(x), so that by the new Gauss-Bonnet
formula, the Euler characteristic of G agrees with the total potential
theoretic energy sum_x,y g(x,y)=X(G) of G. The curvature K now relates to the
probability measure p minimizing the internal energy U(p)=sum_x,y g(x,y) p(x)
p(y) of the complex. Since both the internal energy (here linked to topology)
and Shannon entropy are natural and unique in classes of functionals, we then
look at critical points p of the Helmholtz free energy F(p)=(1-T) U(p)-T S(p)
which combines the energy functional U and the entropy functional S(p)=-sum_x
p(x) log(p(x)). As the temperature T changes, we observe bifurcation phenomena.
Already for G=K_3 both a saddle node bifurcation and a pitchfork bifurcation
occur. The saddle node bifurcation leads to a catastrophe: the function T ->
F(p(T),T) is discontinuous if p(T) is a free energy minimizer. | [
1,
0,
1,
0,
0,
0
] |
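The identities quoted above (unimodularity of L'=1+A', the energy theorem sum_x,y g(x,y)=X(G), and the super-trace form str(g)=X(G)) can be checked numerically on a tiny example. The sketch below does this for the Whitney complex of the triangle K_3; the choice of example and the code layout are assumptions made purely for illustration.

```python
import itertools
import numpy as np

# All non-empty faces of the triangle: the Whitney complex of K_3.
simplices = [frozenset(s) for r in range(1, 4)
             for s in itertools.combinations(range(3), r)]
n = len(simplices)

# Connection graph G': two distinct simplices are adjacent when they intersect.
A = np.zeros((n, n))
for i, j in itertools.combinations(range(n), 2):
    if simplices[i] & simplices[j]:
        A[i, j] = A[j, i] = 1

L = np.eye(n) + A                      # connection Laplacian L' = 1 + A'
g = np.linalg.inv(L)                   # Green function values g(x, y)

dims = np.array([len(s) - 1 for s in simplices])
euler = int(np.sum((-1.0) ** dims))    # X(G) = #vertices - #edges + #triangles = 1

print("det L'    :", round(np.linalg.det(L)))                      # unimodular: +-1
print("sum g(x,y):", round(g.sum()))                               # energy theorem: X(G)
print("str(g)    :", round(np.sum((-1.0) ** dims * np.diag(g))))   # Gauss-Bonnet: X(G)
print("X(G)      :", euler)
```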
Title: Global Patterns of Synchronization in Human Communications,
Abstract: Social media are transforming global communication and coordination. The data
derived from social media can reveal patterns of human behavior at all levels
and scales of society. Using geolocated Twitter data, we have quantified
collective behaviors across multiple scales, ranging from the commutes of
individuals, to the daily pulse of 50 major urban areas and global patterns of
human coordination. Human activity and mobility patterns manifest the synchrony
required for contingency of actions between individuals. Urban areas show
regular cycles of contraction and expansion that resemble heartbeats, linked
primarily to social rather than natural cycles. Business hours and circadian
rhythms influence daily cycles of work, recreation, and sleep. Different urban
areas have characteristic signatures of daily collective activities. The
differences are consistent with a new emergent global synchrony that couples
behavior in distant regions across the world. A globally synchronized peak emerges
that includes the exchange of ideas and information across Europe, Africa, Asia and
Australasia. We propose a dynamical model to explain the emergence of global
synchrony in the context of increasing global communication and reproduce the
observed behavior. The collective patterns we observe show how social
interactions lead to interdependence of behavior manifest in the
synchronization of communication. The creation and maintenance of temporally
sensitive social relationships results in the emergence of complexity of the
larger scale behavior of the social system. | [
1,
1,
0,
0,
0,
0
] |
Title: On the predictability of infectious disease outbreaks,
Abstract: Infectious disease outbreaks recapitulate biology: they emerge from the
multi-level interaction of hosts, pathogens, and their shared environment. As a
result, predicting when, where, and how far diseases will spread requires a
complex systems approach to modeling. Recent studies have demonstrated that
predicting different components of outbreaks--e.g., the expected number of
cases, pace and tempo of cases needing treatment, demand for prophylactic
equipment, importation probability etc.--is feasible. Therefore, advancing both
the science and practice of disease forecasting now requires testing for the
presence of fundamental limits to outbreak prediction. To investigate the
question of outbreak prediction, we study the information theoretic limits to
forecasting across a broad set of infectious diseases using permutation entropy
as a model independent measure of predictability. Studying the predictability
of a diverse collection of historical outbreaks--including chlamydia, dengue,
gonorrhea, hepatitis A, influenza, measles, mumps, polio, and whooping
cough--we identify a fundamental entropy barrier for infectious disease time
series forecasting. However, we find that for most diseases this barrier to
prediction is often well beyond the time scale of single outbreaks. We also
find that the forecast horizon varies by disease and demonstrate that both
shifting model structures and social network heterogeneity are the most likely
mechanisms for the observed differences across contagions. Our results
highlight the importance of moving beyond time series forecasting, by embracing
dynamic modeling approaches, and suggest challenges for performing model
selection across long time series. We further anticipate that our findings will
contribute to the rapidly growing field of epidemiological forecasting and may
relate more broadly to the predictability of complex adaptive systems. | [
0,
1,
0,
0,
0,
0
] |
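Permutation entropy, the model-independent predictability measure used above, is straightforward to compute: count the ordinal patterns of length `order` in the series and take the Shannon entropy of their distribution. The sketch below uses a hypothetical noisy seasonal case-count series as a stand-in, not the surveillance data from the study.

```python
import math
import random
from collections import Counter

def permutation_entropy(series, order=3, delay=1, normalize=True):
    """Permutation entropy of a time series: Shannon entropy of the
    distribution of ordinal patterns of length `order` (Bandt-Pompe)."""
    patterns = Counter()
    for i in range(len(series) - (order - 1) * delay):
        window = [series[i + j * delay] for j in range(order)]
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        patterns[pattern] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    if normalize:
        h /= math.log(math.factorial(order))   # 1.0 = maximally unpredictable
    return h

if __name__ == "__main__":
    # hypothetical weekly case counts: a noisy seasonal signal has lower
    # permutation entropy (is more predictable) than a shuffled version of it
    random.seed(0)
    cases = [100 + 50 * math.sin(2 * math.pi * t / 52) + random.gauss(0, 2)
             for t in range(520)]
    shuffled = cases[:]
    random.shuffle(shuffled)
    print("seasonal:", round(permutation_entropy(cases), 3),
          "shuffled:", round(permutation_entropy(shuffled), 3))
```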
Title: Multi-Player Bandits Revisited,
Abstract: Multi-player Multi-Armed Bandits (MAB) have been extensively studied in the
literature, motivated by applications to Cognitive Radio systems. Driven by
such applications as well, we motivate the introduction of several levels of
feedback for multi-player MAB algorithms. Most existing work assumes that
sensing information is available to the algorithm. Under this assumption, we
improve the state-of-the-art lower bound for the regret of any decentralized
algorithms and introduce two algorithms, RandTopM and MCTopM, that are shown to
empirically outperform existing algorithms. Moreover, we provide strong
theoretical guarantees for these algorithms, including a notion of asymptotic
optimality in terms of the number of selections of bad arms. We then introduce
a promising heuristic, called Selfish, that can operate without sensing
information, which is crucial for emerging applications to Internet of Things
networks. We investigate the empirical performance of this algorithm and
provide some first theoretical elements for the understanding of its behavior. | [
1,
0,
0,
1,
0,
0
] |
Title: Model Averaging for Generalized Linear Model with Covariates that are Missing completely at Random,
Abstract: In this paper, we consider the estimation of generalized linear models with
covariates that are missing completely at random. We propose a model averaging
estimation method and prove that the corresponding model averaging estimator is
asymptotically optimal under certain assumptions. Simulation results illustrate
that this method has better performance than other alternatives in most
situations. | [
0,
0,
1,
1,
0,
0
] |
Title: Demonstration of the length stability requirements for ALPS II with a high finesse 9.2m cavity,
Abstract: Light-shining-through-a-wall experiments represent a new experimental
approach in the search for undiscovered elementary particles not accessible
with accelerator based experiments. The next generation of these experiments,
such as ALPS~II, require high finesse, long baseline optical cavities with fast
length control. In this paper we report on a length stabilization control loop
used to keep a 9.2\,m cavity resonant. It achieves a unity-gain-frequency of
4\,kHz and actuates on a mirror with a diameter of 50.8\,mm. The finesse of
this cavity was measured to be 101,304$\pm$540 for 1064\,nm light. The
differential cavity length noise between 1064\,nm and 532\,nm light was also
measured since 532\,nm light will be used to sense the length of the
regeneration cavity. Out-of-loop noise sources and different control strategies
are discussed, in order to fulfill the length stability requirements for
ALPS~II. | [
0,
1,
0,
0,
0,
0
] |
Title: Shrub-depth: Capturing Height of Dense Graphs,
Abstract: The recent increase of interest in the graph invariant called tree-depth and
in its applications in algorithms and logic on graphs led to a natural
question: is there an analogously useful "depth" notion also for dense graphs
(say, one which is stable under graph complementation)? To this end, in a 2012
conference paper, a new notion of shrub-depth was introduced, such that it
is related to the established notion of clique-width in a similar way as
tree-depth is related to tree-width. Since then shrub-depth has been
successfully used in several research papers. Here we provide an in-depth
review of the definition and basic properties of shrub-depth, and we focus on
its logical aspects which turned out to be most useful. In particular, we use
shrub-depth to give a characterization of the lower ${\omega}$ levels of the
MSO1 transduction hierarchy of simple graphs. | [
1,
0,
0,
0,
0,
0
] |
Title: Detecting Statistical Interactions from Neural Network Weights,
Abstract: Interpreting neural networks is a crucial and challenging task in machine
learning. In this paper, we develop a novel framework for detecting statistical
interactions captured by a feedforward multilayer neural network by directly
interpreting its learned weights. Depending on the desired interactions, our
method can achieve significantly better or similar interaction detection
performance compared to the state-of-the-art without searching an exponential
solution space of possible interactions. We obtain this accuracy and efficiency
by observing that interactions between input features are created by the
non-additive effect of nonlinear activation functions, and that interacting
paths are encoded in weight matrices. We demonstrate the performance of our
method and the importance of discovered interactions via experimental results
on both synthetic datasets and real-world application datasets. | [
1,
0,
0,
1,
0,
0
] |
Title: Making intersections safer with I2V communication,
Abstract: Intersections are hazardous places. Threats arise from interactions among
pedestrians, bicycles and vehicles, more complicated vehicle trajectories in
the absence of lane markings, phases that prevent determining who has the right
of way, invisible vehicle approaches, vehicle obstructions, and illegal
movements. These challenges are not fully addressed by the "road diet" and road
redesign prescribed in Vision Zero plans, nor will they be completely overcome
by autonomous vehicles with their many sensors and tireless attention to
surroundings. Accidents can also occur because drivers, cyclists and
pedestrians do not have the information they need to avoid wrong decisions. In
these cases, the missing information can be computed and broadcast by an
intelligent intersection. The information gives the current full signal phase,
an estimate of the time when the phase will change, and the occupancy of the
blind spots of the driver or autonomous vehicle. The paper develops a design of
the intelligent intersection, motivated by the analysis of an accident at an
intersection in Tempe, AZ, between an automated Uber Volvo and a manual Honda
CRV, and culminates in a proposal for an intelligent intersection
infrastructure. The intelligent intersection also serves as a software-enabled
version of the `protected intersection' design to improve the passage of
cyclists and pedestrians through an intersection. | [
1,
0,
0,
0,
0,
0
] |
Title: Convolutional Sparse Representations with Gradient Penalties,
Abstract: While convolutional sparse representations enjoy a number of useful
properties, they have received limited attention for image reconstruction
problems. The present paper compares the performance of block-based and
convolutional sparse representations in the removal of Gaussian white noise.
While the usual formulation of the convolutional sparse coding problem is
slightly inferior to the block-based representations in this problem, the
performance of the convolutional form can be boosted beyond that of the
block-based form by the inclusion of suitable penalties on the gradients of the
coefficient maps. | [
1,
0,
0,
0,
0,
0
] |
Title: On the origin of the crescent-shaped distributions observed by MMS at the magnetopause,
Abstract: MMS observations recently confirmed that crescent-shaped electron velocity
distributions in the plane perpendicular to the magnetic field occur in the
electron diffusion region near reconnection sites at Earth's magnetopause. In
this paper, we re-examine the origin of the crescent-shaped distributions in
the light of our new finding that ions and electrons are drifting in opposite
directions when displayed in magnetopause boundary-normal coordinates.
Therefore, ExB drifts cannot cause the crescent shapes. We performed a
high-resolution multi-scale simulation capturing sub-electron skin depth
scales. The results suggest that the crescent-shaped distributions are caused
by meandering orbits without necessarily requiring any additional processes
found at the magnetopause such as the highly asymmetric magnetopause ambipolar
electric field. We use an adiabatic Hamiltonian model of particle motion to
confirm that conservation of canonical momentum in the presence of magnetic
field gradients causes the formation of crescent shapes without invoking
asymmetries or the presence of an ExB drift. An important consequence of this
finding is that we expect crescent-shaped distributions also to be observed in
the magnetotail, a prediction that MMS will soon be able to test. | [
0,
1,
0,
0,
0,
0
] |
Title: Projected support points: a new method for high-dimensional data reduction,
Abstract: In an era where big and high-dimensional data is readily available, data
scientists are inevitably faced with the challenge of reducing this data for
expensive downstream computation or analysis. To this end, we present here a
new method for reducing high-dimensional big data into a representative point
set, called projected support points (PSPs). A key ingredient in our method is
the so-called sparsity-inducing (SpIn) kernel, which encourages the
preservation of low-dimensional features when reducing high-dimensional data.
We begin by introducing a unifying theoretical framework for data reduction,
connecting PSPs with fundamental sampling principles from experimental design
and Quasi-Monte Carlo. Through this framework, we then derive sparsity
conditions under which the curse-of-dimensionality in data reduction can be
lifted for our method. Next, we propose two algorithms for one-shot and
sequential reduction via PSPs, both of which exploit big data subsampling and
majorization-minimization for efficient optimization. Finally, we demonstrate
the practical usefulness of PSPs in two real-world applications, the first for
data reduction in kernel learning, and the second for reducing Markov Chain
Monte Carlo (MCMC) chains. | [
0,
0,
0,
1,
0,
0
] |
Title: AutonoVi: Autonomous Vehicle Planning with Dynamic Maneuvers and Traffic Constraints,
Abstract: We present AutonoVi, a novel algorithm for autonomous vehicle navigation
that supports dynamic maneuvers and satisfies traffic constraints and norms.
Our approach is based on optimization-based maneuver planning that supports
dynamic lane-changes, swerving, and braking in all traffic scenarios and guides
the vehicle to its goal position. We take into account various traffic
constraints, including collision avoidance with other vehicles, pedestrians,
and cyclists using control velocity obstacles. We use a data-driven approach to
model the vehicle dynamics for control and collision avoidance. Furthermore,
our trajectory computation algorithm takes into account traffic rules and
behaviors, such as stopping at intersections and stoplights, based on an
arc-spline representation. We have evaluated our algorithm in a simulated
environment and tested its interactive performance in urban and highway driving
scenarios with tens of vehicles, pedestrians, and cyclists. These scenarios
include jaywalking pedestrians, sudden stops from high speeds, safely passing
cyclists, a vehicle suddenly swerving into the roadway, and high-density
traffic where the vehicle must change lanes to progress more effectively. | [
1,
0,
0,
0,
0,
0
] |
Title: On SGD's Failure in Practice: Characterizing and Overcoming Stalling,
Abstract: Stochastic Gradient Descent (SGD) is widely used in machine learning problems
to efficiently perform empirical risk minimization, yet, in practice, SGD is
known to stall before reaching the actual minimizer of the empirical risk. SGD
stalling has often been attributed to its sensitivity to the conditioning of
the problem; however, as we demonstrate, SGD will stall even when applied to a
simple linear regression problem with unity condition number for standard
learning rates. Thus, in this work, we numerically demonstrate and
mathematically argue that stalling is a crippling and generic limitation of SGD
and its variants in practice. Once we have established the problem of stalling,
we generalize an existing framework for hedging against its effects, which (1)
deters SGD and its variants from stalling, (2) still provides convergence
guarantees, and (3) makes SGD and its variants more practical methods for
minimization. | [
0,
0,
1,
1,
0,
0
] |
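The stalling behaviour described above is easy to reproduce even in the well-conditioned case the abstract mentions. The sketch below runs constant-step SGD on a 1-D linear regression: the last iterate settles into a noise ball around the minimizer and stops improving, while a running (Polyak) average of the iterates keeps improving. The averaging remedy shown here is a standard illustration of hedging against stalling, not the specific framework generalized in the paper; step size, noise level, and checkpoints are assumptions.

```python
import numpy as np

def sgd_linear_regression(steps=20000, lr=0.05, seed=0):
    """Constant-step SGD on a well-conditioned 1-D linear regression.
    The iterate reaches a noise ball around the minimizer and then stalls,
    while the running (Polyak) average of the iterates keeps improving."""
    rng = np.random.default_rng(seed)
    w_true = 2.0
    w, w_avg = 0.0, 0.0
    for t in range(1, steps + 1):
        x = rng.standard_normal()
        y = w_true * x + 0.5 * rng.standard_normal()    # noisy observation
        grad = (w * x - y) * x                          # gradient of 0.5*(w*x - y)^2
        w -= lr * grad
        w_avg += (w - w_avg) / t                        # running average of iterates
        if t in (100, 1000, 10000, 20000):
            print(f"t={t:6d}  |w - w*| = {abs(w - w_true):.4f}  "
                  f"|avg - w*| = {abs(w_avg - w_true):.4f}")

if __name__ == "__main__":
    sgd_linear_regression()
```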
Title: Composite Rational Functions and Arithmetic Progressions,
Abstract: In this paper we deal with composite rational functions having zeros and
poles forming consecutive elements of an arithmetic progression. We also
correct a result published earlier related to composite rational functions
having a fixed number of zeros and poles. | [
0,
0,
1,
0,
0,
0
] |
Title: The Geometry of Concurrent Interaction: Handling Multiple Ports by Way of Multiple Tokens (Long Version),
Abstract: We introduce a geometry of interaction model for Mazza's multiport
interaction combinators, a graph-theoretic formalism which is able to
faithfully capture concurrent computation as embodied by process algebras like
the $\pi$-calculus. The introduced model is based on token machines in which
not one but multiple tokens are allowed to traverse the underlying net at the
same time. We prove soundness and adequacy of the introduced model. The former
is proved as a simulation result between the token machines one obtains along
any reduction sequence. The latter is obtained by a fine analysis of
convergence, both in nets and in token machines. | [
1,
0,
0,
0,
0,
0
] |
Title: A High Space Density of Luminous Lyman Alpha Emitters at z~6.5,
Abstract: We present the results of a systematic search for Lyman-alpha emitters (LAEs)
at $6 \lesssim z \lesssim 7.6$ using the HST WFC3 Infrared Spectroscopic
Parallel (WISP) Survey. Our total volume over this redshift range is $\sim 8
\times10^5$ Mpc$^3$, comparable to many of the narrowband surveys despite their
larger area coverage. We find two LAEs at $z=6.38$ and $6.44$ with line
luminosities of L$_{\mathrm{Ly}\alpha} \sim 4.7 \times 10^{43}$ erg s$^{-1}$,
putting them among the brightest LAEs discovered at these redshifts. Taking
advantage of the broad spectral coverage of WISP, we are able to rule out
almost all lower-redshift contaminants. The WISP LAEs have a high number
density of $7.7\times10^{-6}$ Mpc$^{-3}$. We argue that the LAEs reside in
Mpc-scale ionized bubbles that allow the Lyman-alpha photons to redshift out of
resonance before encountering the neutral IGM. We discuss possible ionizing
sources and conclude that the observed LAEs alone are not sufficient to ionize
the bubbles. | [
0,
1,
0,
0,
0,
0
] |
Title: SYZ transforms for immersed Lagrangian multi-sections,
Abstract: In this paper, we study the geometry of the SYZ transform on a semi-flat
Lagrangian torus fibration. Our starting point is an investigation on the
relation between Lagrangian surgery of a pair of straight lines in a symplectic
2-torus and extension of holomorphic vector bundles over the mirror elliptic
curve, via the SYZ transform for immersed Lagrangian multi-sections. This study
leads us to a new notion of equivalence between objects in the immersed Fukaya
category of a general compact symplectic manifold $(M, \omega)$, under which
the immersed Floer cohomology is invariant; in particular, this provides an
answer to a question of Akaho-Joyce. Furthermore, if $M$ admits a Lagrangian
torus fibration over an integral affine manifold, we prove, under some
additional assumptions, that this new equivalence is mirror to isomorphism
between holomorphic vector bundles over the dual torus fibration via the SYZ
transform. | [
0,
0,
1,
0,
0,
0
] |
Title: Topological Representation of the Transit Sets of k-Point Crossover Operators,
Abstract: $k$-point crossover operators and their recombination sets are studied from
different perspectives. We show that transit functions of $k$-point crossover
generate, for all $k>1$, the same convexity as the interval function of the
underlying graph. This settles in the negative an open problem by Mulder about
whether the geodesic convexity of a connected graph $G$ is uniquely determined
by its interval function $I$. The conjecture of Gitchoff and Wagner that for
each transit set $R_k(x,y)$ distinct from a hypercube there is a unique pair of
parents from which it is generated is settled affirmatively. Along the way we
characterize transit functions whose underlying graphs are Hamming graphs, and
those with underlying partial cube graphs. For general values of $k$ it is
shown that the transit sets of $k$-point crossover operators are the subsets
with maximal Vapnik-Chervonenkis dimension. Moreover, the transit sets of
$k$-point crossover on binary strings form topes of uniform oriented matroid of
VC-dimension $k+1$. The Topological Representation Theorem for oriented
matroids therefore implies that $k$-point crossover operators can be
represented by pseudosphere arrangements. This provides the tools necessary to
study the special case $k=2$ in detail. | [
1,
0,
1,
0,
0,
0
] |