Columns: text (string, lengths 138 to 2.38k) | labels (sequence, length 6) | Predictions (sequence, lengths 1 to 3)
Title: Accurate ranking of influential spreaders in networks based on dynamically asymmetric link-impact, Abstract: We propose an efficient and accurate measure for ranking spreaders and identifying the influential ones in spreading processes in networks. While the edges determine the connections among the nodes, their specific role in spreading should be considered explicitly. An edge connecting nodes i and j may differ in its importance for spreading from i to j and from j to i. The key issue is whether node j, after being infected by i through the edge, would reach out to other nodes that i itself could not reach directly. It becomes necessary to invoke two unequal weights w_ij and w_ji characterizing the importance of an edge according to the neighborhoods of nodes i and j. The total asymmetric directional weight originating from a node leads to a novel measure s_i which quantifies the impact of the node in spreading processes. An s-shell decomposition scheme further assigns an s-shell index or weighted coreness to the nodes. The effectiveness and accuracy of rankings based on s_i and the weighted coreness are demonstrated by applying them to nine real-world networks. Results show that they generally outperform rankings based on the nodes' degree and k-shell index, while maintaining a low computational complexity. Our work represents a crucial step towards understanding and controlling the spread of diseases, rumors, information, trends, and innovations in networks.
[ 1, 1, 0, 0, 0, 0 ]
[ "Computer Science", "Physics", "Mathematics" ]
Title: Sphere geometry and invariants, Abstract: A finite abstract simplicial complex G defines two finite simple graphs: the Barycentric refinement G1, connecting two simplices if one is a subset of the other, and the connection graph G', connecting two simplices if they intersect. We prove that the Poincare-Hopf value i(x)=1-X(S(x)), where X is the Euler characteristic and S(x) is the unit sphere of a vertex x in G1, agrees with the Green function value g(x,x), the diagonal element of the inverse of (1+A'), where A' is the adjacency matrix of G'. By unimodularity, det(1+A') is the product of the parities (-1)^dim(x) of the simplices in G, so the Fredholm matrix 1+A' is in GL(n,Z), where n is the number of simplices in G. We show that the set of possible unit sphere topologies in G1 is a combinatorial invariant of the complex G; hence the Green function range of G is also a combinatorial invariant. To prove the invariance of the unit sphere topology we use the fact that all unit spheres in G1 decompose as a join of a stable and an unstable part. The join operation + renders the category X of simplicial complexes into a monoid, where the empty complex is the 0 element and the cone construction adds 1. The augmented Grothendieck group (X,+,0) contains the graph and sphere monoids (Graphs,+,0) and (Spheres,+,0). The Poincare-Hopf functionals i(G) as well as the volume are multiplicative functions on (X,+). For the sphere group, both i(G) and the Fredholm characteristic are characters. The join + can be augmented with a product * so that we obtain a commutative ring (X,+,0,*,1) which has both additive and multiplicative primes and which contains a subring of signed complete complexes isomorphic to the integers (Z,+,0,*,1). We also look at the spectrum of the Laplacian of the join of two graphs. Both for the addition + and the multiplication *, one can ask whether unique prime factorization holds.
[ 1, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Index Search Algorithms for Databases and Modern CPUs, Abstract: Over the years, many different indexing techniques and search algorithms have been proposed, including CSS-trees, CSB+ trees, k-ary binary search, and fast architecture-sensitive tree search. There have also been papers on how best to set the many different parameters of these index structures, such as the node size of CSB+ trees. These indices have been proposed because CPU speeds have been increasing at a dramatically higher rate than memory speeds, giving rise to the Von Neumann CPU--Memory bottleneck. To hide the long latencies caused by memory access, it has become very important to make good use of the features of modern CPUs. In order to drive down the average number of CPU clock cycles required to execute CPU instructions, and thus increase throughput, it has become important to achieve a good utilization of CPU resources. Some of these are the data and instruction caches and the translation lookaside buffers. It has also become important to avoid branch misprediction penalties and to exploit the vectorization provided by CPUs in the form of SIMD instructions. While the layout of index structures has been heavily optimized for the data cache of modern CPUs, the instruction cache has been neglected so far. In this paper, we present NitroGen, a framework that uses code generation to speed up index traversal in main-memory database systems. By bringing together data and code, we make index structures use the dormant resource of the instruction cache. We show how to combine index compilation with previous approaches, such as binary tree search, cache-sensitive tree search, and the architecture-sensitive tree search presented by Kim et al.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: An Efficient Algorithm for the Multicomponent Compressible Navier-Stokes Equations in Low- and High-Mach Number Regimes, Abstract: The goal of this study is to develop an efficient numerical algorithm applicable to a wide range of compressible multicomponent flows. Although many highly efficient algorithms have been proposed for simulating each type of the flows, the construction of a universal solver is known to be challenging. Extreme cases, such as incompressible and highly compressible flows, or inviscid and highly viscous flows, require different numerical treatments in order to maintain the efficiency, stability, and accuracy of the method. Linearized block implicit (LBI) factored schemes are known to provide an efficient way of solving the compressible Navier-Stokes equations implicitly, allowing us to avoid stability restrictions at low Mach number and high viscosity. However, the methods' splitting error has been shown to grow and dominate physical fluxes as the Mach number goes to zero. In this paper, a splitting error reduction technique is proposed to solve the issue. A novel finite element shock-capturing algorithm, proposed by Guermond and Popov, is reformulated in terms of finite differences, extended to the stiffened gas equation of state (SG EOS) and combined with the LBI factored scheme to stabilize the method around flow discontinuities at high Mach numbers. A novel stabilization term is proposed for low Mach number applications. The resulting algorithm is shown to be efficient in both low and high Mach number regimes. The algorithm is extended to the multicomponent case using an interface capturing strategy with surface tension as a continuous surface force. Numerical tests are presented to verify the performance and stability properties for a wide range of flows.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics", "Computer Science" ]
Title: Estimating Heterogeneous Causal Effects in the Presence of Irregular Assignment Mechanisms, Abstract: This paper provides a link between causal inference and machine learning techniques - specifically, Classification and Regression Trees (CART) - in observational studies where the receipt of the treatment is not randomized, but the assignment to the treatment can be assumed to be randomized (irregular assignment mechanism). The paper contributes to the growing applied machine learning literature on causal inference by proposing a modified version of the Causal Tree (CT) algorithm to draw causal inference from an irregular assignment mechanism. The proposed method is developed by merging the CT approach with the instrumental variable framework for causal inference, hence the name Causal Tree with Instrumental Variable (CT-IV). Compared to CT, the main strength of CT-IV is that it deals more efficiently with the heterogeneity of causal effects, as demonstrated by a series of numerical results obtained on synthetic data. The proposed algorithm is then used to evaluate a public policy implemented by the Tuscan Regional Administration (Italy), which aimed at easing access to credit for small firms. In this context, CT-IV breaks fresh ground for target-based policies, identifying interesting heterogeneous causal effects.
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Computer Science", "Quantitative Finance" ]
Title: Fourth-order time-stepping for stiff PDEs on the sphere, Abstract: We present in this paper algorithms for solving stiff PDEs on the unit sphere with spectral accuracy in space and fourth-order accuracy in time. These are based on a variant of the double Fourier sphere method in coefficient space with multiplication matrices that differ from the usual ones, and implicit-explicit time-stepping schemes. Operating in coefficient space with these new matrices allows one to use a sparse direct solver, avoids the coordinate singularity and maintains smoothness at the poles, while implicit-explicit schemes circumvent severe restrictions on the time-steps due to stiffness. A comparison is made against exponential integrators and it is found that implicit-explicit schemes perform best. Implementations in MATLAB and Chebfun make it possible to compute the solution of many PDEs to high accuracy in a very convenient fashion.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: On Formalizing Fairness in Prediction with Machine Learning, Abstract: Machine learning algorithms for prediction are increasingly being used in critical decisions affecting human lives. Various fairness formalizations, with no firm consensus yet, are employed to prevent such algorithms from systematically discriminating against people based on certain attributes protected by law. The aim of this article is to survey how fairness is formalized in the machine learning literature for the task of prediction and present these formalizations with their corresponding notions of distributive justice from the social sciences literature. We provide theoretical as well as empirical critiques of these notions from the social sciences literature and explain how these critiques limit the suitability of the corresponding fairness formalizations to certain domains. We also suggest two notions of distributive justice which address some of these critiques and discuss avenues for prospective fairness formalizations.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: What do we know about the geometry of space?, Abstract: The belief that three dimensional space is infinite and flat in the absence of matter is a canon of physics that has been in place since the time of Newton. The assumption that space is flat at infinity has guided several modern physical theories. But what do we actually know to support this belief? A simple argument, called the "Telescope Principle", asserts that all that we can know about space is bounded by observations. Physical theories are best when they can be verified by observations, and that should also apply to the geometry of space. The Telescope Principle is simple to state, but it leads to very interesting insights into relativity and Yang-Mills theory via projective equivalences of their respective spaces.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Measuring the effects of Loop Quantum Cosmology in the CMB data, Abstract: In this Essay we investigate the observational signatures of Loop Quantum Cosmology (LQC) in the CMB data. First, we concentrate on the dynamics of LQC and we provide the basic cosmological functions. We then obtain the power spectrum of scalar and tensor perturbations in order to study the performance of LQC against the latest CMB data. We find that LQC provides a robust prediction for the main slow-roll parameters, like the scalar spectral index and the tensor-to-scalar fluctuation ratio, which are in excellent agreement within $1\sigma$ with the values recently measured by the Planck collaboration. This result indicates that LQC can be seen as an alternative scenario with respect to that of standard inflation.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization, Abstract: In many modern machine learning applications, the structures of the underlying mathematical models often yield nonconvex optimization problems. Due to the intractability of nonconvexity, there is a rising need to develop efficient methods for solving general nonconvex problems with certain performance guarantees. In this work, we investigate the accelerated proximal gradient method for nonconvex programming (APGnc). The method compares a usual proximal gradient step with a linear extrapolation step and accepts the one that has the lower function value, so as to achieve a monotonic decrease. Specifically, under a general nonsmooth and nonconvex setting, we provide a rigorous argument showing that the limit points of the sequence generated by APGnc are critical points of the objective function. Then, by exploiting the Kurdyka-{\L}ojasiewicz (\KL) property for a broad class of functions, we establish the linear and sub-linear convergence rates of the function value sequence generated by APGnc. We further propose a stochastic variance-reduced APGnc (SVRG-APGnc) and establish its linear convergence under a special case of the \KL property. We also extend the analysis to the inexact version of these methods and develop an adaptive momentum strategy that improves the numerical performance.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics", "Statistics" ]
Title: Exploring the Interconnectedness of Cryptocurrencies using Correlation Networks, Abstract: Correlation networks were used to detect characteristics which, although fixed over time, have an important influence on the evolution of prices over time. Potentially important features were identified using the websites and whitepapers of cryptocurrencies with the largest userbases. These were assessed using two datasets to enhance robustness: one with fourteen cryptocurrencies beginning from 9 November 2017, and a subset with nine cryptocurrencies starting 9 September 2016, both ending 6 March 2018. Separately analysing the subset of cryptocurrencies raised the number of data points from 115 to 537, and improved robustness to changes in relationships over time. Excluding USD Tether, the results showed a positive association between different cryptocurrencies that was statistically significant. Robust, strong positive associations were observed for six cryptocurrencies where one was a fork of the other; Bitcoin / Bitcoin Cash was an exception. There was evidence for the existence of a group of cryptocurrencies particularly associated with Cardano, and a separate group correlated with Ethereum. The data was not consistent with a token's functionality or creation mechanism being the dominant determinants of the evolution of prices over time but did suggest that factors other than speculation contributed to the price.
[ 0, 0, 0, 0, 0, 1 ]
[ "Quantitative Finance", "Statistics" ]
Title: World Literature According to Wikipedia: Introduction to a DBpedia-Based Framework, Abstract: Among the manifold takes on world literature, it is our goal to contribute to the discussion from a digital point of view by analyzing the representation of world literature in Wikipedia with its millions of articles in hundreds of languages. As a preliminary, we introduce and compare three different approaches to identify writers on Wikipedia using data from DBpedia, a community project with the goal of extracting and providing structured information from Wikipedia. Equipped with our basic set of writers, we analyze how they are represented throughout the 15 biggest Wikipedia language versions. We combine intrinsic measures (mostly examining the connectedness of articles) with extrinsic ones (analyzing how often articles are frequented by readers) and develop methods to evaluate our results. The better part of our findings seems to convey a rather conservative, old-fashioned version of world literature, but a version derived from reproducible facts revealing an implicit literary canon based on the editing and reading behavior of millions of people. While still having to solve some known issues, the introduced methods will help us build an observatory of world literature to further investigate its representativeness and biases.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Approximation Fixpoint Theory and the Well-Founded Semantics of Higher-Order Logic Programs, Abstract: We define a novel, extensional, three-valued semantics for higher-order logic programs with negation. The new semantics is based on interpreting the types of the source language as three-valued Fitting-monotonic functions at all levels of the type hierarchy. We prove that there exists a bijection between such Fitting-monotonic functions and pairs of two-valued-result functions where the first member of the pair is monotone-antimonotone and the second member is antimonotone-monotone. By deriving an extension of consistent approximation fixpoint theory (Denecker et al. 2004) and utilizing the above bijection, we define an iterative procedure that produces for any given higher-order logic program a distinguished extensional model. We demonstrate that this model is actually a minimal one. Moreover, we prove that our construction generalizes the familiar well-founded semantics for classical logic programs, making in this way our proposal an appealing formulation for capturing the well-founded semantics for higher-order logic programs. This paper is under consideration for acceptance in TPLP.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Quasi-two-dimensional Fermi surfaces with localized $f$ electrons in the layered heavy-fermion compound CePt$_2$In$_7$, Abstract: We report measurements of the de Haas-van Alphen effect in the layered heavy-fermion compound CePt$_2$In$_7$ in high magnetic fields up to 35 T. Above an angle-dependent threshold field, we observed several de Haas-van Alphen frequencies originating from almost ideally two-dimensional Fermi surfaces. The frequencies are similar to those previously observed to develop only above a much higher field of 45 T, where a clear anomaly was detected and proposed to originate from a change in the electronic structure [M. M. Altarawneh et al., Phys. Rev. B 83, 081103 (2011)]. Our experimental results are compared with band structure calculations performed for both CePt$_2$In$_7$ and LaPt$_2$In$_7$, and the comparison suggests localized $f$ electrons in CePt$_2$In$_7$. This conclusion is further supported by comparing experimentally observed Fermi surfaces in CePt$_2$In$_7$ and PrPt$_2$In$_7$, which are found to be almost identical. The measured effective masses in CePt$_2$In$_7$ are only moderately enhanced above the bare electron mass $m_0$, from 2$m_0$ to 6$m_0$.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Differential Forms, Linked Fields and the $u$-Invariant, Abstract: We associate an Albert form to any pair of cyclic algebras of prime degree $p$ over a field $F$ with $\operatorname{char}(F)=p$ which coincides with the classical Albert form when $p=2$. We prove that if every Albert form is isotropic then $H^4(F)=0$. As a result, we obtain that if $F$ is a linked field with $\operatorname{char}(F)=2$ then its $u$-invariant is either $0,2,4$ or $8$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Analyzing Hidden Representations in End-to-End Automatic Speech Recognition Systems, Abstract: Neural models have become ubiquitous in automatic speech recognition systems. While neural networks are typically used as acoustic models in more complex systems, recent studies have explored end-to-end speech recognition systems based on neural networks, which can be trained to directly predict text from input acoustic features. Although such systems are conceptually elegant and simpler than traditional systems, it is less obvious how to interpret the trained models. In this work, we analyze the speech representations learned by a deep end-to-end model that is based on convolutional and recurrent layers, and trained with a connectionist temporal classification (CTC) loss. We use a pre-trained model to generate frame-level features which are given to a classifier that is trained on frame classification into phones. We evaluate representations from different layers of the deep model and compare their quality for predicting phone labels. Our experiments shed light on important aspects of the end-to-end model such as layer depth, model complexity, and other design choices.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: On the Fine-Grained Complexity of Empirical Risk Minimization: Kernel Methods and Neural Networks, Abstract: Empirical risk minimization (ERM) is ubiquitous in machine learning and underlies most supervised learning methods. While there has been a large body of work on algorithms for various ERM problems, the exact computational complexity of ERM is still not understood. We address this issue for multiple popular ERM problems including kernel SVMs, kernel ridge regression, and training the final layer of a neural network. In particular, we give conditional hardness results for these problems based on complexity-theoretic assumptions such as the Strong Exponential Time Hypothesis. Under these assumptions, we show that there are no algorithms that solve the aforementioned ERM problems to high accuracy in sub-quadratic time. We also give similar hardness results for computing the gradient of the empirical loss, which is the main computational burden in many non-convex learning tasks.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Deep Learning for Predicting Asset Returns, Abstract: Deep learning searches for nonlinear factors for predicting asset returns. Predictability is achieved via multiple layers of composite factors as opposed to additive ones. Viewed in this way, asset pricing studies can be revisited using multi-layer deep learners, such as rectified linear units (ReLU) or long-short-term-memory (LSTM) for time-series effects. State-of-the-art algorithms including stochastic gradient descent (SGD), TensorFlow and dropout design provide implementation and efficient factor exploration. To illustrate our methodology, we revisit the equity market risk premium dataset of Welch and Goyal (2008). We find the existence of nonlinear factors which explain predictability of returns, in particular at the extremes of the characteristic space. Finally, we conclude with directions for future research.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Quantitative Finance" ]
Title: An EM Based Probabilistic Two-Dimensional CCA with Application to Face Recognition, Abstract: Recently, two-dimensional canonical correlation analysis (2DCCA) has been successfully applied for image feature extraction. Instead of concatenating the columns of the images into one-dimensional vectors, the method works directly with two-dimensional image matrices. Although 2DCCA works well in different recognition tasks, it lacks a probabilistic interpretation. In this paper, we present a probabilistic framework for 2DCCA called probabilistic 2DCCA (P2DCCA) and an iterative EM-based algorithm for optimizing the parameters. Experimental results on synthetic and real data demonstrate superior performance in loading factor estimation for P2DCCA compared to 2DCCA. For real data, three subsets of the AR face database and the UMIST face database confirm the robustness of the proposed algorithm in face recognition tasks with different illumination conditions, facial expressions, poses and occlusions.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Changing Fashion Cultures, Abstract: The paper presents a novel concept that analyzes and visualizes worldwide fashion trends. Our goal is to reveal cutting-edge fashion trends without displaying an ordinary fashion style. To achieve the fashion-based analysis, we created a new fashion culture database (FCDB), which consists of 76 million geo-tagged images in 16 cosmopolitan cities. To grasp a fashion trend of mixed fashion styles, the paper also proposes an unsupervised fashion trend descriptor (FTD) using a fashion descriptor, a codeword vector, and temporal analysis. To unveil fashion trends in the FCDB, the temporal analysis in FTD effectively emphasizes consecutive features between two different times. In experiments, we clearly show the analysis of fashion trends and fashion-based city similarity. As a result of large-scale data collection and an unsupervised analyzer, the proposed approach achieves world-level fashion visualization in a time series. The code, model, and FCDB will be publicly available after the construction of the project page.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: A strong failure of aleph_0-stability for atomic classes, Abstract: We study classes of atomic models At_T of a countable, complete first-order theory T. We prove that if At_T is not pcl-small, i.e., there is an atomic model N that realizes uncountably many types over pcl(a) for some finite tuple a from N, then there are $2^{\aleph_1}$ non-isomorphic atomic models of T, each of size $\aleph_1$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Sub-Gaussian estimators of the mean of a random vector, Abstract: We study the problem of estimating the mean of a random vector $X$ given a sample of $N$ independent, identically distributed points. We introduce a new estimator that achieves a purely sub-Gaussian performance under the only condition that the second moment of $X$ exists. The estimator is based on a novel concept of a multivariate median.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Mathematics" ]
Title: A Survey of Bandwidth and Latency Enhancement Approaches for Mobile Cloud Game Multicasting, Abstract: Among mobile cloud applications, mobile cloud gaming has gained significant popularity in recent years. In mobile cloud games, textures, game objects, and game events are typically streamed from a server to the mobile client. One of the challenges in mobile cloud gaming is how to efficiently multicast gaming contents and updates in Massively Multi-player Online Games (MMOGs). This report surveys state-of-the-art techniques introduced for game synchronization and multicasting mechanisms that decrease latency and bandwidth consumption, and discusses several schemes proposed in this area that can be applied to any networked gaming context. From our point of view, gaming applications demand high interactivity. Therefore, concentrating on gaming applications will eventually cover a wide range of applications without violating the limited scope of this survey.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: A Distributed Algorithm for Computing a Common Fixed Point of a Finite Family of Paracontractions, Abstract: A distributed algorithm is described for finding a common fixed point of a family of m>1 nonlinear maps M_i : R^n -> R^n, assuming that each map is a paracontraction and that at least one such common fixed point exists. The common fixed point is computed simultaneously by m agents, with each agent i knowing only M_i, the current estimates of the fixed point generated by its neighbors, and nothing more. Each agent recursively updates its estimate of a fixed point by utilizing the current estimates generated by each of its neighbors. Neighbor relations are characterized by a time-varying directed graph N(t). It is shown, under suitably general conditions on N(t), that the algorithm causes all agents' estimates to converge to the same common fixed point of the m nonlinear maps.
[ 1, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Exponential Moving Average Model in Parallel Speech Recognition Training, Abstract: With the rapid growth of training data, large-scale parallel training on multi-GPU clusters is now widely applied to neural network model learning. We present a new approach that applies the exponential moving average method to large-scale parallel training of neural network models. It is a non-interference strategy: the exponential moving average model is not broadcast to the distributed workers to update their local models after model synchronization during training, but is instead kept as the final model of the training system. Fully-connected feed-forward neural networks (DNNs) and deep unidirectional Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) are successfully trained with the proposed method for large vocabulary continuous speech recognition on Shenma voice search data in Mandarin. The character error rate (CER) of Mandarin speech recognition is further reduced compared with state-of-the-art parallel training approaches.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: The BCS critical temperature in a weak homogeneous magnetic field, Abstract: We show that, within a linear approximation of BCS theory, a weak homogeneous magnetic field lowers the critical temperature by an explicit constant times the field strength, up to higher order terms. This provides a rigorous derivation and generalization of results obtained in the physics literature from WHH theory of the upper critical magnetic field. A new ingredient in our proof is a rigorous phase approximation to control the effects of the magnetic field.
[ 0, 1, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: A Comparative Study of Full-Duplex Relaying Schemes for Low Latency Applications, Abstract: Various sectors are likely to carry a set of emerging applications while targeting reliable communication with low latency transmission. To address this issue, building on a spectrally-efficient transmission, this paper investigates the performance of a single full-duplex (FD) relay system and considers, for that purpose, two basic relaying schemes, namely symbol-by-symbol transmission, i.e., amplify-and-forward (AF), and block-by-block transmission, i.e., selective decode-and-forward (SDF). The analysis presents an exhaustive comparison covering both schemes over two different transmission modes: the non-combining mode, where the best of the direct and relay links is decoded, and the signal-combining mode, where the direct and relay links are combined at the receiver side. With latency as a primary target, simulations refine these comparisons and reveal that the AF relaying scheme is better adapted to the combining mode, whereas the SDF relaying scheme is more suitable for the non-combining mode.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Massively parallel multicanonical simulations, Abstract: Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of $10^4$ parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Computer Science" ]
Title: Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals, Abstract: Interpretability of deep neural networks is a recently emerging area of machine learning research targeting a better understanding of how models perform feature selection and derive their classification decisions. In this paper, two neural network architectures are trained on spectrogram and raw waveform data for audio classification tasks on a newly created audio dataset, and layer-wise relevance propagation (LRP), a previously proposed interpretability method, is applied to investigate the models' feature selection and decision making. Through systematic manipulation of the input data, it is demonstrated that the networks are highly reliant on the features marked as relevant by LRP. Our results show that by making deep audio classifiers interpretable, one can analyze and compare the properties and strategies of different models beyond classification accuracy, which potentially opens up new ways for model improvements.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Fourier dimension and spectral gaps for hyperbolic surfaces, Abstract: We obtain an essential spectral gap for a convex co-compact hyperbolic surface $M=\Gamma\backslash\mathbb H^2$ which depends only on the dimension $\delta$ of the limit set. More precisely, we show that when $\delta>0$ there exists $\varepsilon_0=\varepsilon_0(\delta)>0$ such that the Selberg zeta function has only finitely many zeroes $s$ with $\Re s>\delta-\varepsilon_0$. The proof uses the fractal uncertainty principle approach developed by Dyatlov-Zahl [arXiv:1504.06589]. The key new component is a Fourier decay bound for the Patterson-Sullivan measure, which may be of independent interest. This bound uses the fact that transformations in the group $\Gamma$ are nonlinear, together with estimates on exponential sums due to Bourgain which follow from the discretized sum-product theorem in $\mathbb R$.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Soft modes and strain redistribution in continuous models of amorphous plasticity: the Eshelby paradigm, and beyond?, Abstract: The deformation of disordered solids relies on swift and localised rearrangements of particles. The inspection of soft vibrational modes can help predict the locations of these rearrangements, while the strain that they actually redistribute mediates collective effects. Here, we study soft modes and strain redistribution in a two-dimensional continuous mesoscopic model based on a Ginzburg-Landau free energy for perfect solids, supplemented with a plastic disorder potential that accounts for shear softening and rearrangements. Regardless of the disorder strength, our numerical simulations show soft modes that are always sharply peaked at the softest point of the material (unlike what happens for the depinning of an elastic interface). Contrary to widespread views, the deformation halo around this peak does not always have a quadrupolar (Eshelby-like) shape. Instead, for finite and narrowly-distributed disorder, it looks like a fracture, with a strain field that concentrates along some easy directions. These findings are rationalised with analytical calculations in the case where the plastic disorder is confined to a point-like `impurity'. In this case, we unveil a continuous family of elastic propagators, which are identical for the soft modes and for the equilibrium configurations. This family interpolates between the standard quadrupolar propagator and the fracture-like one as the anisotropy of the elastic medium is increased. Therefore, we expect to see a fracture-like propagator when extended regions on the brink of failure have already softened along the shear direction and thus rendered the material anisotropic, but not failed yet. We speculate that this might be the case in carefully aged glasses just before macroscopic failure.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Flexible Deep Neural Network Processing, Abstract: The recent success of Deep Neural Networks (DNNs) has drastically improved the state of the art for many application domains. While achieving high accuracy, deploying state-of-the-art DNNs is a challenge since they typically require billions of expensive arithmetic computations. In addition, DNNs are typically deployed in ensembles to boost accuracy, which further exacerbates the system requirements. This computational overhead is an issue for many platforms, e.g. data centers and embedded systems, with tight latency and energy budgets. In this article, we introduce a flexible DNN ensemble processing technique, which achieves a large reduction in average inference latency while incurring a small to negligible accuracy drop. Our technique is flexible in that it allows for dynamic adaptation between quality of results (QoR) and execution runtime. We demonstrate the effectiveness of the technique on AlexNet and ResNet-50 using the ImageNet dataset. This technique can also easily handle other types of networks.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science" ]
Title: Cross-label Suppression: A Discriminative and Fast Dictionary Learning with Group Regularization, Abstract: This paper addresses image classification through learning a compact and discriminative dictionary efficiently. Given a structured dictionary with each atom (column in the dictionary matrix) related to some label, we propose a cross-label suppression constraint to enlarge the difference among representations for different classes. Meanwhile, we introduce group regularization to enforce representations to preserve the label properties of the original samples, meaning that representations for the same class are encouraged to be similar. Thanks to the cross-label suppression, we do not resort to the frequently used $\ell_0$-norm or $\ell_1$-norm for coding, and obtain computational efficiency without losing discriminative power for categorization. Moreover, two simple classification schemes are also developed to take full advantage of the learnt dictionary. Extensive experiments on six data sets covering face recognition, object categorization, scene classification, texture recognition and sport action categorization are conducted, and the results show that the proposed approach can outperform many recently presented dictionary algorithms in both recognition accuracy and computational efficiency.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Eigenvalues of compactly perturbed operators via entropy numbers, Abstract: We derive new estimates for the number of discrete eigenvalues of compactly perturbed operators on Banach spaces, assuming that the perturbing operator is an element of a weak entropy number ideal. Our results improve upon earlier results by the author and by Demuth et al. The main tool in our proofs is an inequality of Carl. In particular, in contrast to all previous results we do not rely on tools from complex analysis.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: A data driven trimming procedure for robust classification, Abstract: Classification rules can be severely affected by the presence of disturbing observations in the training sample. Looking for an optimal classifier with such data may lead to unnecessarily complex rules. Simpler yet effective classification rules could therefore be achieved if we relax the goal of fitting a good rule for the whole training sample and consider only a fraction of the data. In this paper we introduce a new method based on trimming to produce classification rules with guaranteed performance on a significant fraction of the data. In particular, we provide an automatic way of determining the right trimming proportion and obtain, in this setting, oracle bounds for the classification error on the new data set.
[ 0, 0, 1, 1, 0, 0 ]
[ "Statistics", "Computer Science" ]
Title: The Salesman's Improved Tours for Fundamental Classes, Abstract: Finding the exact integrality gap $\alpha$ for the LP relaxation of the metric Travelling Salesman Problem (TSP) has been an open problem for over thirty years, with little progress made. It is known that $4/3 \leq \alpha \leq 3/2$, and a famous conjecture states $\alpha = 4/3$. For this problem, essentially two "fundamental" classes of instances have been proposed. This fundamental property means that in order to show that the integrality gap is at most $\rho$ for all instances of metric TSP, it is sufficient to show it only for the instances in the fundamental class. However, despite the importance and the simplicity of such classes, no apparent effort has been deployed for improving the integrality gap bounds for them. In this paper we take a natural first step in this endeavour, and consider the $1/2$-integer points of one such class. We successfully improve the upper bound for the integrality gap from $3/2$ to $10/7$ for a superclass of these points, as well as prove a lower bound of $4/3$ for the superclass. Our methods involve innovative applications of tools from combinatorial optimization which have the potential to be more broadly applied.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Rovibrational optical cooling of a molecular beam, Abstract: Cooling the rotation and vibration of molecules with broadband light sources has been possible for trapped molecular ions or ultracold molecules. Because of low power spectral density, the cooling timescale has never fallen below a few milliseconds. Here we report on rotational and vibrational cooling of a supersonic beam of barium monofluoride molecules in less than 440 $\mu$s. Vibrational cooling was optimized by enhancing the spectral power density of a semiconductor light source at the underlying molecular transitions, allowing us to transfer all the population of $v''=1-3$ into the vibrational ground state ($v''=0$). Rotational cooling, which requires efficient vibrational pumping, was then achieved. According to a Boltzmann fit, the rotational temperature was reduced by almost a factor of 10. In this fashion, the population of the lowest rotational levels increased by more than one order of magnitude.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Approximation Algorithms for Rectangle Packing Problems (PhD Thesis), Abstract: In rectangle packing problems we are given the task of placing axis-aligned rectangles in a given plane region, so that they do not overlap with each other. In Maximum Weight Independent Set of Rectangles (MWISR), their position is given and we can only select which rectangles to choose, while trying to maximize their total weight. In Strip Packing (SP), we have to pack all the given rectangles in a rectangular region of fixed width, while minimizing its height. In 2-Dimensional Geometric Knapsack (2DGK), the target region is a square of a given size, and our goal is to select and pack a subset of the given rectangles of maximum weight. We study a generalization of MWISR and use it to improve the approximation for a resource allocation problem called bagUFP. We revisit some classical results on SP and 2DGK, by proposing a framework based on smaller containers that are packed with simpler rules; while variations of this scheme are indeed a standard technique in this area, we abstract away some of the problem-specific differences, obtaining simpler algorithms that work for different problems. We obtain improved approximations for SP in pseudo-polynomial time, and for a variant of 2DGK where one can rotate the rectangles by 90°. For the latter, we propose the first algorithms with approximation factor better than 2. For the main variant of 2DGK (without rotations), a container-based approach seems to face a natural barrier of 2 in the approximation factor. Thus, we consider a generalized kind of packing that combines container packings with another packing problem that we call L-packing problem, where we have to pack rectangles in an L-shaped region of the plane. By finding a (1 + {\epsilon})-approximation for this problem and exploiting the combinatorial structure of 2DGK, we obtain the first algorithms that break the barrier of 2 for the approximation factor of this problem.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Exponential Bounds for the Erdős-Ginzburg-Ziv Constant, Abstract: The Erdős-Ginzburg-Ziv constant of an abelian group $G$, denoted $\mathfrak{s}(G)$, is the smallest $k\in\mathbb{N}$ such that any sequence of elements of $G$ of length $k$ contains a zero-sum subsequence of length $\exp(G)$. In this paper, we use the partition rank, which generalizes the slice rank, to prove that for any odd prime $p$, \[ \mathfrak{s}\left(\mathbb{F}_{p}^{n}\right)\leq(p-1)2^{p}\left(J(p)\cdot p\right)^{n} \] where $0.8414<J(p)<0.91837$ is the constant appearing in Ellenberg and Gijswijt's bound on arithmetic progression-free subsets of $\mathbb{F}_{p}^{n}$. For large $n$, and $p>3$, this is the first exponential improvement to the trivial bound. We also provide a near optimal result conditional on the conjecture that $\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}$ satisfies property $D$, showing that in this case \[ \mathfrak{s}\left(\left(\mathbb{Z}/k\mathbb{Z}\right)^{n}\right)\leq(k-1)4^{n}+k. \]
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Thermal physics of the inner coma: ALMA studies of the methanol distribution and excitation in comet C/2012 K1 (PanSTARRS), Abstract: We present spatially and spectrally-resolved observations of CH$_3$OH emission from comet C/2012 K1 (PanSTARRS) using The Atacama Large Millimeter/submillimeter Array (ALMA) on 2014 June 28-29. Two-dimensional maps of the line-of-sight average rotational temperature ($T_{rot}$) were derived, covering spatial scales $0.3''-1.8''$ (corresponding to sky-projected distances $\rho\sim500$-2500 km). The CH$_3$OH column density distributions are consistent with isotropic, uniform outflow from the nucleus, with no evidence for extended sources of CH$_3$OH in the coma. The $T_{rot}(\rho)$ radial profiles show a significant drop within a few thousand kilometers of the nucleus, falling from about 60 K to 20 K between $\rho=0$ and 2500 km on June 28, whereas on June 29, $T_{rot}$ fell from about 120 K to 40 K between $\rho=$ 0 km and 1000 km. The observed $T_{rot}$ behavior is interpreted primarily as a result of variations in the coma kinetic temperature due to adiabatic cooling of the outflowing gas, as well as radiative cooling of the CH$_3$OH rotational levels. Our excitation model shows that radiative cooling is more important for the $J=7-6$ transitions (at 338 GHz) than for the $K=3-2$ transitions (at 252 GHz), resulting in a strongly sub-thermal distribution of levels in the $J=7-6$ band at $\rho\gtrsim1000$ km. For both bands, the observed temperature drop with distance is less steep than predicted by standard coma theoretical models, which suggests the presence of a significant source of heating in addition to the photolytic heat sources usually considered.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Fixed-Gain Augmented-State Tracking-Filters, Abstract: A procedure for the design of fixed-gain tracking filters, using an augmented-state observer with signal and interference subspaces, is proposed. The signal subspace incorporates an integrating Newtonian model and a second-order maneuver model that is matched to a sustained constant-g turn; the deterministic interference model creates a Nyquist null for smoother track estimates. The selected models provide a simple means of shaping and analyzing the (transient and steady-state) response of tracking-filters of elevated order.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Collective Sedimentation of Squirmers under Gravity, Abstract: Active particles, which interact hydrodynamically, display a remarkable variety of emergent collective phenomena. We use squirmers to model spherical microswimmers and explore the collective behavior of thousands of them under the influence of strong gravity using the method of multi-particle collision dynamics for simulating fluid flow. The sedimentation profile depends on the ratio of swimming to sedimentation velocity as well as on the squirmer type. It shows close packed squirmer layers at the bottom and a highly dynamic region with exponential density dependence towards the top. The mean vertical orientation of the squirmers strongly depends on height. For swimming velocities larger than the sedimentation velocity, squirmers show strong convection in the exponential region. We quantify the strength of convection and the extent of convection cells by the vertical current density and its current dipole, which are large for neutral squirmers as well as for weak pushers and pullers.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Quantitative Biology" ]
Title: Confidence Intervals and Hypothesis Testing for the Permutation Entropy with an application to Epilepsy, Abstract: In nonlinear dynamics, and to a lesser extent in other fields, a widely used measure of complexity is the Permutation Entropy. But there is still no known method to determine the accuracy of this measure. There has been little research on the statistical properties of this quantity that characterize time series. The literature describes some resampling methods for quantities used in nonlinear dynamics - such as the largest Lyapunov exponent - but all of these seem to fail. In this contribution we propose a parametric bootstrap methodology using a symbolic representation of the time series in order to obtain the distribution of the Permutation Entropy estimator. We perform several time series simulations given by well-known stochastic processes, the $1/f^{\alpha}$ noise family, and show in each case that the proposed accuracy measure is as efficient as the one obtained by the frequentist approach of repeating the experiment. The complexity of brain electrical activity, measured by the Permutation Entropy, has been extensively used in epilepsy research for the detection of dynamical changes in the electroencephalogram (EEG) signal, with no consideration of the variability of this complexity measure. An application of the parametric bootstrap methodology is used to compare normal and pre-ictal EEG signals.
[ 0, 1, 0, 1, 0, 0 ]
[ "Statistics", "Quantitative Biology" ]
Title: Feature overwriting as a finite mixture process: Evidence from comprehension data, Abstract: The ungrammatical sentence "The key to the cabinets are on the table" is known to lead to an illusion of grammaticality. As discussed in the meta-analysis by Jaeger et al., 2017, faster reading times are observed at the verb are in the agreement-attraction sentence above compared to the equally ungrammatical sentence "The key to the cabinet are on the table". One explanation for this facilitation effect is the feature percolation account: the plural feature on cabinets percolates up to the head noun key, leading to the illusion. An alternative account is in terms of cue-based retrieval (Lewis & Vasishth, 2005), which assumes that the non-subject noun cabinets is misretrieved due to a partial feature-match when a dependency completion process at the auxiliary initiates a memory access for a subject with plural marking. We present evidence for yet another explanation for the observed facilitation. Because the second sentence has two nouns with identical number, it is possible that these are, in some proportion of trials, more difficult to keep distinct, leading to slower reading times at the verb in the first sentence above; this is the feature overwriting account of Nairne, 1990. We show that the feature overwriting proposal can be implemented as a finite mixture process. We reanalysed ten published data-sets, fitting hierarchical Bayesian mixture models to these data assuming a two-mixture distribution. We show that in nine out of the ten studies, a mixture distribution corresponding to feature overwriting furnishes a superior fit over both the feature percolation and the cue-based retrieval accounts.
[ 1, 0, 0, 1, 0, 0 ]
[ "Statistics", "Quantitative Biology" ]
Title: Learning a Unified Control Policy for Safe Falling, Abstract: Being able to fall safely is a necessary motor skill for humanoids performing highly dynamic tasks, such as running and jumping. We propose a new method to learn a policy that minimizes the maximal impulse during the fall. The optimization solves both a discrete contact planning problem and a continuous optimal control problem. Once trained, the policy can compute the optimal next contacting body part (e.g. left foot, right foot, or hands), the contact location and timing, and the required joint actuation. We represent the policy as a mixture of actor-critic neural networks, which consists of n control policies and the corresponding value functions. Each actor-critic pair is associated with one of the n possible contacting body parts. During execution, the policy corresponding to the highest value function is executed, and the associated body part becomes the next contact with the ground. With this mixture of actor-critic architecture, the discrete contact sequence planning is solved through the selection of the best critics, while the continuous control problem is solved by the optimization of the actors. We show that our policy can achieve comparable, sometimes even higher, rewards than a recursive search of the action space using dynamic programming, while running 50 to 400 times faster during online execution.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Collisional Dynamics of Solitons in the Coupled PT symmetric Nonlocal nonlinear Schrodinger equations, Abstract: We investigate the focusing coupled PT-symmetric nonlocal nonlinear Schrodinger equation employing the Darboux transformation approach. We find a family of exact solutions including pairs of Bright-Bright, Dark-Dark and Bright-Dark solitons, in addition to solitary waves. We show that one can convert a bright bound state into a dark bound state in a two-soliton solution by selectively fine-tuning the amplitude-dependent parameter. We also show that the energy in each mode remains conserved, unlike in the celebrated Manakov model. We also characterize the behaviour of the soliton solutions in detail. We emphasize that the above phenomena occur due to the nonlocality of the model.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Constructions and classifications of projective Poisson varieties, Abstract: This paper is intended both as an introduction to the algebraic geometry of holomorphic Poisson brackets, and as a survey of results on the classification of projective Poisson manifolds that have been obtained in the past twenty years. It is based on the lecture series delivered by the author at the Poisson 2016 Summer School in Geneva. The paper begins with a detailed treatment of Poisson surfaces, including adjunction, ruled surfaces and blowups, and leading to a statement of the full birational classification. We then describe several constructions of Poisson threefolds, outlining the classification in the regular case, and the case of rank-one Fano threefolds (such as projective space). Following a brief introduction to the notion of Poisson subspaces, we discuss Bondal's conjecture on the dimensions of degeneracy loci on Poisson Fano manifolds. We close with a discussion of log symplectic manifolds with simple normal crossings degeneracy divisor, including a new proof of the classification in the case of rank-one Fano manifolds.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Generalised Reichenbachian Common Cause Systems, Abstract: The principle of the common cause claims that if an improbable coincidence has occurred, there must exist a common cause. This is generally taken to mean that positive correlations between non-causally related events should disappear when conditioning on the action of some underlying common cause. The extended interpretation of the principle, by contrast, urges that common causes should be called for in order to explain positive deviations between the estimated correlation of two events and the expected value of their correlation. The aim of this paper is to provide the extended reading of the principle with a general probabilistic model, capturing the simultaneous action of a system of multiple common causes. To this end, two distinct models are elaborated, and the necessary and sufficient conditions for their existence are determined.
[ 1, 0, 0, 1, 0, 0 ]
[ "Mathematics", "Statistics" ]
Title: Markov Models for Health Economic Evaluations: The R Package heemod, Abstract: Health economic evaluation studies are widely used in public health to assess health strategies in terms of their cost-effectiveness and inform public policies. We developed an R package for Markov models implementing most of the modelling and reporting features described in reference textbooks and guidelines: deterministic and probabilistic sensitivity analysis, heterogeneity analysis, time dependency on state-time and model-time (semi-Markov and non-homogeneous Markov models), etc. In this paper we illustrate the features of heemod by building and analysing an example Markov model. We then explain the design and the underlying implementation of the package.
[ 0, 0, 0, 1, 0, 0 ]
[ "Statistics", "Quantitative Biology" ]
Title: Amorphous Alloys, Degradation Performance of Azo Dyes: Review, Abstract: Freshwater is more important today than ever before, and it is being contaminated by the textile industry. The removal of dyes from textile effluent using amorphous alloys has been studied extensively by many researchers. This review article presents up-to-date developments in the azo-dye degradation performance of amorphous alloys, a new class of catalytic materials. Numerous amorphous alloys have been developed to achieve higher degradation efficiency than conventional materials in the removal of azo dyes from wastewater. One objective of this review is to organize the scattered available information on the wide range of amorphous alloys that are potentially effective in dye removal. The review covers the factors affecting azo-dye removal, such as solution pH, initial dye concentration, and adsorbent dosage. It is concluded that widely available Fe-, Mg-, Co-, Al- and Mn-based amorphous alloys show appreciable potential for removing several types of azo dyes from wastewater. Concerning amorphous alloys for future research, some suggestions are proposed and conclusions drawn.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Reentrant Phase Coherence in Superconducting Nanowire Composites, Abstract: The short coherence lengths characteristic of low-dimensional superconductors are associated with usefully high critical fields or temperatures. Unfortunately, such materials are often sensitive to disorder and suffer from phase fluctuations in the superconducting order parameter which diverge with temperature $T$, magnetic field $H$ or current $I$. We propose an approach to overcome synthesis and fluctuation problems: building superconductors from inhomogeneous composites of nanofilaments. Macroscopic crystals of quasi-one-dimensional Na$_{2-\delta}$Mo$_6$Se$_6$ featuring Na vacancy disorder ($\delta\approx$~0.2) are shown to behave as percolative networks of superconducting nanowires. Long range order is established via transverse coupling between individual one-dimensional filaments, yet phase coherence remains unstable to fluctuations and localization in the zero-($T$,$H$,$I$) limit. However, a region of reentrant phase coherence develops upon raising ($T$,$H$,$I$). We attribute this phenomenon to an enhancement of the transverse coupling due to electron delocalization. Our observations of reentrant phase coherence coincide with a peak in the Josephson energy $E_J$ at non-zero ($T$,$H$,$I$), which we estimate using a simple analytical model for a disordered anisotropic superconductor. Na$_{2-\delta}$Mo$_6$Se$_6$ is therefore a blueprint for a future generation of nanofilamentary superconductors with inbuilt resilience to phase fluctuations at elevated ($T$,$H$,$I$).
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Semi-parametric Dynamic Asymmetric Laplace Models for Tail Risk Forecasting, Incorporating Realized Measures, Abstract: The joint Value at Risk (VaR) and expected shortfall (ES) quantile regression model of Taylor (2017) is extended by incorporating a realized measure to drive the tail risk dynamics, as a potentially more efficient driver than daily returns. Both a maximum likelihood and an adaptive Bayesian Markov Chain Monte Carlo method are employed for estimation, whose properties are assessed and compared via a simulation study; results favour the Bayesian approach, which is subsequently employed in a forecasting study of seven market indices and two individual assets. The proposed models are compared to a range of parametric, non-parametric and semi-parametric models, including GARCH, Realized-GARCH and the joint VaR and ES quantile regression models in Taylor (2017). The comparison is in terms of accuracy of one-day-ahead Value-at-Risk and Expected Shortfall forecasts, over a long forecast sample period that includes the global financial crisis in 2007-2008. The results favour the proposed models incorporating a realized measure, especially when employing the sub-sampled Realized Variance and the sub-sampled Realized Range.
[ 0, 0, 0, 0, 0, 1 ]
[ "Statistics", "Quantitative Finance" ]
Title: Extended B-Spline Collocation Method For KdV-Burgers Equation, Abstract: The extended form of the classical polynomial cubic B-spline function is used to set up a collocation method for some initial-boundary value problems derived for the Korteweg-de Vries-Burgers equation. The nonexistence of third-order derivatives of the cubic B-splines forces us to reduce the order of the term uxxx, giving a coupled system of equations. The space discretization of this system is accomplished by the collocation method, following time discretization with the Crank-Nicolson method. Two initial-boundary value problems, one having an analytical solution and the other set up with a non-analytical initial condition, have been simulated by the proposed method.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Detection of methylisocyanate (CH3NCO) in a solar-type protostar, Abstract: We report the detection of the prebiotic molecule CH3NCO in a solar-type protostar, IRAS16293-2422 B. A significant abundance of this species on the surface of the comet 67P/Churyumov-Gerasimenko has been proposed, and it has recently been detected in hot cores around high-mass protostars. We observed IRAS16293-2422 B with ALMA in the 90 GHz to 265 GHz range, and detected 8 unblended transitions of CH3NCO. From our Local Thermodynamic Equilibrium analysis we derived an excitation temperature of 110+-19 K and a column density of (4.0+-0.3)x10^15 cm^-2, which results in an abundance of <=(1.4+-0.1)x10^-10 with respect to molecular hydrogen. This implies CH3NCO/HNCO and CH3NCO/NH2CHO column density ratios of ~0.08. Our modelling of the chemistry of CH3NCO suggests that both ice-surface and gas-phase formation reactions of this molecule are needed to explain the observations.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Maximum Number of Common Zeros of Homogeneous Polynomials over Finite Fields, Abstract: About two decades ago, Tsfasman and Boguslavsky conjectured a formula for the maximum number of common zeros that $r$ linearly independent homogeneous polynomials of degree $d$ in $m+1$ variables with coefficients in a finite field with $q$ elements can have in the corresponding $m$-dimensional projective space. Recently, it has been shown by Datta and Ghorpade that this conjecture is valid if $r$ is at most $m+1$ and can be invalid otherwise. Moreover a new conjecture was proposed for many values of $r$ beyond $m+1$. In this paper, we prove that this new conjecture holds true for several values of $r$. In particular, this settles the new conjecture completely when $d=3$. Our result also includes the positive result of Datta and Ghorpade as a special case. Further, we determine the maximum number of zeros in certain cases not covered by the earlier conjectures and results, namely, the case of $d=q-1$ and of $d=q$. All these results are directly applicable to the determination of the maximum number of points on sections of Veronese varieties by linear subvarieties of a fixed dimension, and also the determination of generalized Hamming weights of projective Reed-Muller codes.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Transport of Intensity Equation Microscopy for Dynamic Microtubules, Abstract: Microtubules (MTs) are filamentous protein polymers roughly 25 nm in diameter. Ubiquitous in eukaryotes, MTs are well known for their structural role but also act as actuators, sensors, and, in association with other proteins, checkpoint regulators. The thin diameter and transparency of microtubules classifies them as sub-resolution phase objects, with concomitant imaging challenges. Label-free methods for imaging microtubules are preferred when long exposure times would lead to phototoxicity in fluorescence, or for retaining more native structure and activity. This method approaches quantitative phase imaging of MTs as an inverse problem based on the Transport of Intensity Equation. In a co-registered comparison of MT signal-to-background-noise ratio, TIE Microscopy of MTs shows an improvement of more than three times that of video-enhanced bright field imaging. This method avoids the anisotropy caused by prisms used in differential interference contrast and takes only two defocused images as input. Unlike other label-free techniques for imaging microtubules, in TIE microscopy background removal is a natural consequence of taking the difference of two defocused images, so the need to frequently update a background image is eliminated.
[ 0, 1, 0, 0, 0, 0 ]
[ "Quantitative Biology", "Physics" ]
Title: Mathematical Programming formulations for the efficient solution of the $k$-sum approval voting problem, Abstract: In this paper we address the problem of electing a committee among a set of $m$ candidates on the basis of the preferences of a set of $n$ voters. We consider the approval voting method, in which each voter can approve as many candidates as she/he likes by expressing a preference profile (a boolean $m$-vector). In order to elect a committee, a voting rule must be established to 'transform' the $n$ voters' profiles into a winning committee. The problem is widely studied in voting theory; for a variety of voting rules the problem was shown to be computationally difficult, and approximation algorithms and heuristic techniques were proposed in the literature. In this paper we follow an Ordered Weighted Averaging approach and study the $k$-sum approval voting (optimization) problem in the general case $1 \leq k < n$. For this problem we provide different mathematical programming formulations that allow us to solve it in an exact solution framework. We provide computational results showing that our approach is efficient for medium-size test problems ($n$ up to 200, $m$ up to 60), since in all tested cases it was able to find the exact optimal solution in very short computational times.
[ 1, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Computer Science" ]
Title: DAGs with NO TEARS: Continuous Optimization for Structure Learning, Abstract: Estimating the structure of directed acyclic graphs (DAGs, also known as Bayesian networks) is a challenging problem since the search space of DAGs is combinatorial and scales superexponentially with the number of nodes. Existing approaches rely on various local heuristics for enforcing the acyclicity constraint. In this paper, we introduce a fundamentally different strategy: We formulate the structure learning problem as a purely \emph{continuous} optimization problem over real matrices that avoids this combinatorial constraint entirely. This is achieved by a novel characterization of acyclicity that is not only smooth but also exact. The resulting problem can be efficiently solved by standard numerical algorithms, which also makes implementation effortless. The proposed method outperforms existing ones, without imposing any structural assumptions on the graph such as bounded treewidth or in-degree. Code implementing the proposed algorithm is open-source and publicly available at this https URL.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Kinky DNA in solution: Small angle scattering study of a nucleosome positioning sequence, Abstract: DNA is a flexible molecule, but the degree of its flexibility is subject to debate. The commonly-accepted persistence length of $l_p \approx 500\,$\AA\ is inconsistent with recent studies on short-chain DNA that show much greater flexibility but do not probe its origin. We have performed X-ray and neutron small-angle scattering on a short DNA sequence containing a strong nucleosome positioning element, and analyzed the results using a modified Kratky-Porod model to determine possible conformations. Our results support a hypothesis from Crick and Klug in 1975 that some DNA sequences in solution can have sharp kinks, potentially resolving the discrepancy. Our conclusions are supported by measurements on a radiation-damaged sample, where single-strand breaks lead to increased flexibility and by an analysis of data from another sequence, which does not have kinks, but where our method can detect a locally enhanced flexibility due to an $AT$-domain.
[ 0, 0, 0, 0, 1, 0 ]
[ "Physics", "Quantitative Biology" ]
Title: Normalized Direction-preserving Adam, Abstract: Adaptive optimization algorithms, such as Adam and RMSprop, have shown better optimization performance than stochastic gradient descent (SGD) in some scenarios. However, recent studies show that they often lead to worse generalization performance than SGD, especially for training deep neural networks (DNNs). In this work, we identify the reasons that Adam generalizes worse than SGD, and develop a variant of Adam to eliminate the generalization gap. The proposed method, normalized direction-preserving Adam (ND-Adam), enables more precise control of the direction and step size for updating weight vectors, leading to significantly improved generalization performance. Following a similar rationale, we further improve the generalization performance in classification tasks by regularizing the softmax logits. By bridging the gap between SGD and Adam, we also hope to shed light on why certain optimization algorithms generalize better than others.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Learning Deep CNN Denoiser Prior for Image Restoration, Abstract: Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, these two kinds of methods have their respective merits and drawbacks; e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, while discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, a denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with a fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization methods to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers not only achieves promising Gaussian denoising results but can also be used as a prior to deliver good performance for various low-level vision applications.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Phonon-assisted oscillatory exciton dynamics in monolayer MoSe2, Abstract: In monolayer semiconductor transition metal dichalcogenides, the exciton-phonon interaction is expected to strongly affect the photocarrier dynamics. Here, we report on an unusual oscillatory enhancement of the neutral exciton photoluminescence with the excitation laser frequency in monolayer MoSe2. The frequency of oscillation matches that of the M-point longitudinal acoustic phonon, LA(M). Oscillatory behavior is also observed in the steady-state emission linewidth and in time-resolved photoluminescence excitation data, which reveals variation of the exciton lifetime with excitation energy. These results clearly expose the key role played by phonons in the exciton formation and relaxation dynamics of two-dimensional van der Waals semiconductors.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Ground state degeneracy in quantum spin systems protected by crystal symmetries, Abstract: We develop a no-go theorem for two-dimensional bosonic systems with crystal symmetries: if there is a half-integer spin at a rotation center, where the point-group symmetry is $\mathbb D_{2,4,6}$, such a system must have a ground-state degeneracy protected by the crystal symmetry. Such a degeneracy indicates either a broken-symmetry state or an unconventional state of matter. Compared to the Lieb-Schultz-Mattis theorem, our result counts the spin at each rotation center, instead of the total spin per unit cell, and therefore also applies to certain systems with an even number of half-integer spins per unit cell.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Forecasting day-ahead electricity prices in Europe: the importance of considering market integration, Abstract: Motivated by the increasing integration among electricity markets, in this paper we propose two different methods to incorporate market integration in electricity price forecasting and to improve the predictive performance. First, we propose a deep neural network that considers features from connected markets to improve the predictive accuracy in a local market. To measure the importance of these features, we propose a novel feature selection algorithm that, by using Bayesian optimization and functional analysis of variance, evaluates the effect of the features on the algorithm performance. In addition, using market integration, we propose a second model that, by simultaneously predicting prices from two markets, improves the forecasting accuracy even further. As a case study, we consider the electricity market in Belgium and the improvements in forecasting accuracy when using various French electricity features. We show that the two proposed models lead to improvements that are statistically significant. Particularly, due to market integration, the predictive accuracy is improved from 15.7% to 12.5% sMAPE (symmetric mean absolute percentage error). In addition, we show that the proposed feature selection algorithm is able to perform a correct assessment, i.e. to discard the irrelevant features.
[ 1, 0, 0, 1, 0, 0 ]
[ "Statistics", "Quantitative Finance" ]
Title: AADS: Augmented Autonomous Driving Simulation using Data-driven Algorithms, Abstract: Simulation systems have become an essential component in the development and validation of autonomous driving technologies. The prevailing state-of-the-art approach for simulation is to use game engines or high-fidelity computer graphics (CG) models to create driving scenarios. However, creating CG models and vehicle movements (e.g., the assets for simulation) remains a manual task that can be costly and time-consuming. In addition, the fidelity of CG images still lacks the richness and authenticity of real-world images and using these images for training leads to degraded performance. In this paper we present a novel approach to address these issues: Augmented Autonomous Driving Simulation (AADS). Our formulation augments real-world pictures with a simulated traffic flow to create photo-realistic simulation images and renderings. More specifically, we use LiDAR and cameras to scan street scenes. From the acquired trajectory data, we generate highly plausible traffic flows for cars and pedestrians and compose them into the background. The composite images can be re-synthesized with different viewpoints and sensor models. The resulting images are photo-realistic, fully annotated, and ready for end-to-end training and testing of autonomous driving systems from perception to planning. We explain our system design and validate our algorithms with a number of autonomous driving tasks from detection to segmentation and predictions. Compared to traditional approaches, our method offers unmatched scalability and realism. Scalability is particularly important for AD simulation and we believe the complexity and diversity of the real world cannot be realistically captured in a virtual environment. Our augmented approach combines the flexibility in a virtual environment (e.g., vehicle movements) with the richness of the real world to allow effective simulation of anywhere on earth.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Scheme-theoretic Whitney conditions and applications to tangency of projective varieties, Abstract: We investigate a scheme-theoretic variant of Whitney condition a. If X is a projective variety over the field of complex numbers and Y $\subset$ X is a subvariety, then X satisfies the scheme-theoretic Whitney condition a generically along Y provided that the projective dual of X is smooth. We give applications to tangency of projective varieties over C and to convex real algebraic geometry. In particular, we prove a Bertini-type theorem for osculating planes of smooth complex space curves and a generalization of a theorem of Ranestad and Sturmfels describing the algebraic boundary of an affine compact real variety.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Electron paramagnetic resonance g-tensors from state interaction spin-orbit coupling density matrix renormalization group, Abstract: We present a state interaction spin-orbit coupling method to calculate electron paramagnetic resonance (EPR) $g$-tensors from density matrix renormalization group wavefunctions. We apply the technique to compute $g$-tensors for the \ce{TiF3} and \ce{CuCl4^2-} complexes, a [2Fe-2S] model of the active center of ferredoxins, and a \ce{Mn4CaO5} model of the S2 state of the oxygen evolving complex. These calculations raise the prospects of determining $g$-tensors in multireference calculations with a large number of open shells.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Chemistry" ]
Title: Thinking Fast and Slow with Deep Learning and Tree Search, Abstract: Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans. In this paper, we present Expert Iteration (ExIt), a novel reinforcement learning algorithm which decomposes the problem into separate planning and generalisation tasks. Planning new policies is performed by tree search, while a deep neural network generalises those plans. Subsequently, tree search is improved by using the neural network policy to guide search, increasing the strength of new plans. In contrast, standard deep Reinforcement Learning algorithms rely on a neural network not only to generalise plans, but to discover them too. We show that ExIt outperforms REINFORCE for training a neural network to play the board game Hex, and our final tree search agent, trained tabula rasa, defeats MoHex 1.0, the most recent Olympiad Champion player to be publicly released.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Improved Bayesian Compression, Abstract: Compression of Neural Networks (NN) has become a highly studied topic in recent years. The main reason for this is the demand for industrial scale usage of NNs such as deploying them on mobile devices, storing them efficiently, transmitting them via band-limited channels and most importantly doing inference at scale. In this work, we propose to join the Soft-Weight Sharing and Variational Dropout approaches that show strong results to define a new state-of-the-art in terms of model compression.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Ergodicity of spherically symmetric fluid flows outside of a Schwarzschild black hole with random boundary forcing, Abstract: We consider the Burgers equation posed on the outer communication region of a Schwarzschild black hole spacetime. Assuming spherical symmetry for the fluid flow under consideration, we study the propagation and interaction of shock waves under the effect of random forcing. First of all, considering the initial and boundary value problem with boundary data prescribed in the vicinity of the horizon, we establish a generalization of the Hopf--Lax--Oleinik formula, which takes the curved geometry into account and allows us to establish the existence of bounded variation solutions. To this end, we analyze the global behavior of the characteristic curves in the Schwarzschild geometry, including their behavior near the black hole horizon. In a second part, we investigate the long-term statistical properties of solutions when a random forcing is imposed near the black hole horizon and study the ergodicity of the fluid flow under consideration. We prove the existence of a random global attractor and, for the Burgers equation outside of a Schwarzschild black hole, we are able to validate the so-called `one-force-one-solution' principle. Furthermore, all of our results are also established for a pressureless Euler model which consists of two balance laws and includes a transport equation satisfied by the integrated fluid density.
[ 0, 0, 1, 0, 0, 0 ]
[ "Physics", "Mathematics" ]
Title: Concepts of Architecture, Structure and System, Abstract: The current ISO standards pertaining to the concepts of System and Architecture express succinct definitions of these two key terms that lend themselves to practical application and can be understood through elementary mathematical foundations. The current work of ISO/IEC Working Group 42 seeks to refine and elaborate the existing standards. This position paper revisits the fundamental concepts underlying both of these key terms and offers an approach to: (i) refine and exemplify the term 'fundamental concepts' in the current ISO definition of Architecture, (ii) exploit existing standards for the term 'concept', and (iii) introduce a new concept, Architectural Structure, that can serve to unify the current terminology at a fundamental level. Precise elementary examples are used to conceptualise the approach offered.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Densities of Hyperbolic Cusp Invariants, Abstract: We find that cusp densities of hyperbolic knots in the 3-sphere are dense in [0,0.6826...] and those of links are dense in [0,0.853...]. We define a new invariant associated with cusp volume, the cusp crossing density, as the ratio between the cusp volume and the crossing number of a link, and show that cusp crossing density for links is bounded above by 3.1263.... Moreover, there is a sequence of links with cusp crossing density approaching 3. The least upper bound for cusp crossing density remains an open question. For two-component hyperbolic links, cusp crossing density is shown to be dense in the interval [0,1.6923...] and for all hyperbolic links, cusp crossing density is shown to be dense in [0, 2.120...].
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Adaptive Algebraic Multiscale Solver for Compressible Flow in Heterogeneous Porous Media, Abstract: This paper presents the development of an Adaptive Algebraic Multiscale Solver for Compressible flow (C-AMS) in heterogeneous porous media. Similar to the recently developed AMS for incompressible (linear) flows [Wang et al., JCP, 2014], C-AMS operates by defining primal and dual-coarse blocks on top of the fine-scale grid. These coarse grids facilitate the construction of a conservative (finite volume) coarse-scale system and the computation of local basis functions, respectively. However, unlike the incompressible (elliptic) case, the choice of equations to solve for basis functions in compressible problems is not trivial. Therefore, several basis function formulations (incompressible and compressible, with and without accumulation) are considered in order to construct an efficient multiscale prolongation operator. As for the restriction operator, C-AMS allows for both multiscale finite volume (MSFV) and finite element (MSFE) methods. Finally, in order to resolve high-frequency errors, fine-scale (pre- and post-) smoother stages are employed. In order to reduce computational expense, the C-AMS operators (prolongation, restriction, and smoothers) are updated adaptively. In addition to this, the linear system in the Newton-Raphson loop is infrequently updated. Systematic numerical experiments are performed to determine the effect of the various options, outlined above, on the C-AMS convergence behaviour. An efficient C-AMS strategy for heterogeneous 3D compressible problems is developed based on overall CPU times. Finally, C-AMS is compared against an industrial-grade Algebraic MultiGrid (AMG) solver. Results of this comparison illustrate that the C-AMS is quite efficient as a nonlinear solver, even when iterated to machine accuracy.
[ 1, 1, 0, 0, 0, 0 ]
[ "Mathematics", "Physics", "Computer Science" ]
Title: Controlled trapping of single particle states on a periodic substrate by deterministic stubbing, Abstract: A periodic array of atomic sites, described within a tight-binding formalism, is shown to be capable of trapping electronic states as it grows in size and gets stubbed by an atom or an atomic cluster from a side in a deterministic way. We prescribe a method, based on a real-space renormalization group, that unravels a subtle correlation between the positions of the side-coupled atoms and the energy eigenvalues for which the incoming particle finally gets trapped. We discuss how, in such conditions, the periodic backbone gets transformed into an array of infinite quantum wells in the thermodynamic limit. We present a case where the wells have a hierarchical distribution of widths, housing standing-wave solutions in the thermodynamic limit.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Adaptive Noise Cancellation Using Deep Cerebellar Model Articulation Controller, Abstract: This paper proposes a deep cerebellar model articulation controller (DCMAC) for adaptive noise cancellation (ANC). We expand upon the conventional CMAC by stacking single-layer CMAC models into multiple layers to form a DCMAC model and derive a modified backpropagation training algorithm to learn the DCMAC parameters. Compared with conventional CMAC, the DCMAC can characterize nonlinear transformations more effectively because of its deep structure. Experimental results confirm that the proposed DCMAC model outperforms the CMAC in terms of residual noise in an ANC task, showing that DCMAC provides enhanced modeling capability based on channel characteristics.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Portfolio Construction Matters, Abstract: The role of portfolio construction in the implementation of equity market neutral factors is often underestimated. Taking the classical momentum strategy as an example, we show that one can significantly improve the main strategy's features by properly taking care of this key step. More precisely, an optimized portfolio construction algorithm allows one to significantly improve the Sharpe Ratio, reduce sector exposures and volatility fluctuations, and mitigate the strategy's skewness and tail correlation with the market. These results are supported by long-term, world-wide simulations and will be shown to be universal. Our findings are quite general and hold true for a number of other "equity factors". Finally, we discuss the details of a more realistic set-up where we also deal with transaction costs.
[ 0, 0, 0, 0, 0, 1 ]
[ "Quantitative Finance" ]
Title: Function space analysis of deep learning representation layers, Abstract: In this paper we propose a function space approach to Representation Learning and the analysis of the representation layers in deep learning architectures. We show how to compute a weak-type Besov smoothness index that quantifies the geometry of the clustering in the feature space. This approach has already been applied successfully to improve the performance of machine learning algorithms such as the Random Forest and tree-based Gradient Boosting. Our experiments demonstrate that in well-known and well-performing trained networks, the Besov smoothness of the training set, measured in the corresponding hidden-layer feature map representation, increases from layer to layer. We also contribute to the understanding of generalization by showing how the Besov smoothness of the representations decreases as we add more mislabeling to the training data. We hope this approach will contribute to the demystification of some aspects of deep learning.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Random Perturbations of Matrix Polynomials, Abstract: A sum of a large-dimensional random matrix polynomial and a fixed low-rank matrix polynomial is considered. The main assumption is that the resolvent of the random polynomial converges to some deterministic limit. A formula for the limit of the resolvent of the sum is derived, and the eigenvalues are localised. Three instances are considered: a low-rank matrix perturbed by the Wigner matrix, a product $HX$ of a fixed diagonal matrix $H$ and the Wigner matrix $X$, and a special matrix polynomial. The results are illustrated with various examples and numerical simulations.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics", "Statistics" ]
Title: Monitoring Information Quality within Web Service Composition and Execution, Abstract: The composition of web services is a promising approach enabling flexible and loose integration of business applications. Numerous approaches to web service composition have been developed, usually following three main phases: service discovery is based on the semantic description of advertised services, i.e. the functionality of the service; service selection is based on non-functional quality dimensions of the service; and service composition aims to support an underlying process. Most of these approaches explore techniques of static or dynamic design for an optimal service composition. One important aspect has so far been mostly neglected: the output produced by composite web services. In this paper, in contrast to many prominent approaches, we introduce a data quality perspective on web services. Based on a data quality management approach, we propose a framework for analyzing data produced by the composite service execution. Utilising process information together with data in service logs, our approach allows identifying problems in service composition and execution. By analyzing the service execution history, our approach helps to improve common approaches of service selection and composition.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Nonlinear probability. A theory with incompatible stochastic variables, Abstract: In 1991 J.F. Aarnes introduced the concept of quasi-measures in a compact topological space $\Omega$ and established the connection between quasi-states on $C(\Omega)$ and quasi-measures in $\Omega$. This work solved the linearity problem of quasi-states on $C^*$-algebras formulated by R.V. Kadison in 1965. The answer is that a quasi-state need not be linear, so a quasi-state need not be a state. We introduce nonlinear measures in a space $\Omega$ which is a generalization of a measurable space. In this more general setting we are still able to define integration and establish a representation theorem for the corresponding functionals. A probabilistic language is chosen since we feel that the subject should be of some interest to probabilists. In particular we point out that the theory allows for incompatible stochastic variables. The need for incompatible variables is well known in quantum mechanics, but the need seems natural also in other contexts, as we try to explain by a questionary example. Keywords and phrases: Epistemic probability, Integration with respect to measures and other set functions, Banach algebras of continuous functions, Set functions and measures on topological spaces, States, Logical foundations of quantum mechanics.
[ 0, 0, 1, 1, 0, 0 ]
[ "Mathematics", "Physics" ]
Title: Computability of semicomputable manifolds in computable topological spaces, Abstract: We study computable topological spaces and semicomputable and computable sets in these spaces. In particular, we investigate conditions under which semicomputable sets are computable. We prove that a semicomputable compact manifold $M$ is computable if its boundary $\partial M$ is computable. We also show how this result, combined with a certain construction which compactifies a semicomputable set, leads to the conclusion that some noncompact semicomputable manifolds in computable metric spaces are computable.
[ 1, 0, 1, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Acoustic streaming and its suppression in inhomogeneous fluids, Abstract: We present a theoretical and experimental study of boundary-driven acoustic streaming in an inhomogeneous fluid with variations in density and compressibility. In a homogeneous fluid this streaming results from dissipation in the boundary layers (Rayleigh streaming). We show that in an inhomogeneous fluid, an additional non-dissipative force density acts on the fluid to stabilize particular inhomogeneity configurations, which markedly alters and even suppresses the streaming flows. Our theoretical and numerical analysis of the phenomenon is supported by ultrasound experiments performed with inhomogeneous aqueous iodixanol solutions in a glass-silicon microchip.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Disruptive Behavior Disorder (DBD) Rating Scale for Georgian Population, Abstract: In the presented study, the Parent/Teacher Disruptive Behavior Disorder (DBD) rating scale, based on the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR [APA, 2000]) and developed by Pelham and his colleagues (Pelham et al., 1992), was translated and adapted for the assessment of childhood behavioral abnormalities, especially ADHD, ODD and CD, in Georgian children and adolescents. The DBD rating scale was translated into Georgian using the back-translation technique by English-language philologists, and was checked and corrected by qualified psychologists and a psychiatrist of Georgia. Children and adolescents aged 6 to 16 years (N=290; mean age 10.50, SD=2.88), including 153 males (mean age 10.42, SD=2.62) and 141 females (mean age 10.60, SD=3.14), were recruited from different public schools of Tbilisi and the Neurology Department of the Pediatric Clinic of Tbilisi State Medical University. Participants were objectively assessed via interviews with parents/teachers and qualified psychologists in three different settings: school, home and clinic. DBD total scores revealed statistically significant differences between healthy controls (M=27.71, SD=17.26) and children and adolescents with ADHD (M=61.51, SD=22.79). Statistically significant differences were also found for the inattentive subtype between the control (M=8.68, SD=5.68) and ADHD (M=18.15, SD=6.57) groups. In general, children and adolescents with ADHD scored high on the DBD in comparison to typically developing peers. The study also determined the gender-wise prevalence among children and adolescents with ADHD, ODD and CD, revealing a higher prevalence in males than in females in all investigated categories.
[ 0, 0, 0, 1, 0, 0 ]
[ "Quantitative Biology" ]
Title: Robustifying Independent Component Analysis by Adjusting for Group-Wise Stationary Noise, Abstract: We introduce coroICA, confounding-robust independent component analysis, a novel ICA algorithm which decomposes linearly mixed multivariate observations into independent components that are corrupted (and rendered dependent) by hidden group-wise stationary confounding. It extends the ordinary ICA model in a theoretically sound and explicit way to incorporate group-wise (or environment-wise) confounding. We show that our general noise model allows ICA to be performed in settings where other noisy ICA procedures fail. Additionally, it can be used for applications with grouped data by adjusting for different stationary noise within each group. We show that the noise model has a natural relation to causality and explain how it can be applied in the context of causal inference. In addition to our theoretical framework, we provide an efficient estimation procedure and prove identifiability of the unmixing matrix under mild assumptions. Finally, we illustrate the performance and robustness of our method on simulated data, provide audible and visual examples, and demonstrate the applicability to real-world scenarios by experiments on publicly available Antarctic ice core data as well as two EEG data sets. We provide a scikit-learn compatible pip-installable Python package coroICA as well as R and Matlab implementations, accompanied by documentation at this https URL.
[ 0, 0, 0, 1, 1, 0 ]
[ "Computer Science", "Statistics" ]
Title: Diversification-Based Learning in Computing and Optimization, Abstract: Diversification-Based Learning (DBL) derives from a collection of principles and methods introduced in the field of metaheuristics that have broad applications in computing and optimization. We show that the DBL framework goes significantly beyond that of the more recent Opposition-based learning (OBL) framework introduced in Tizhoosh (2005), which has become the focus of numerous research initiatives in machine learning and metaheuristic optimization. We unify and extend earlier proposals in metaheuristic search (Glover, 1997, Glover and Laguna, 1997) to give a collection of approaches that are more flexible and comprehensive than OBL for creating intensification and diversification strategies in metaheuristic search. We also describe potential applications of DBL to various subfields of machine learning and optimization.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Mathematics" ]
Title: Towards Neural Co-Processors for the Brain: Combining Decoding and Encoding in Brain-Computer Interfaces, Abstract: The field of brain-computer interfaces is poised to advance from the traditional goal of controlling prosthetic devices using brain signals to combining neural decoding and encoding within a single neuroprosthetic device. Such a device acts as a "co-processor" for the brain, with applications ranging from inducing Hebbian plasticity for rehabilitation after brain injury to reanimating paralyzed limbs and enhancing memory. We review recent progress in simultaneous decoding and encoding for closed-loop control and plasticity induction. To address the challenge of multi-channel decoding and encoding, we introduce a unifying framework for developing brain co-processors based on artificial neural networks and deep learning. These "neural co-processors" can be used to jointly optimize cost functions with the nervous system to achieve desired behaviors ranging from targeted neuro-rehabilitation to augmentation of brain function.
[ 0, 0, 0, 0, 1, 0 ]
[ "Computer Science", "Quantitative Biology" ]
Title: Enhanced conservation properties of Vlasov codes through coupling with conservative fluid models, Abstract: Many phenomena in collisionless plasma physics require a kinetic description. The evolution of the phase space density can be modeled by means of the Vlasov equation, which has to be solved numerically in most of the relevant cases. One of the problems that often arise in such simulations is the violation of important physical conservation laws. Numerical diffusion in phase space translates into unphysical heating, which can increase the overall energy significantly, depending on the time scale and the plasma regime. In this paper, a general and straightforward way of improving conservation properties of Vlasov schemes is presented that can potentially be applied to a variety of different codes. The basic idea is to use fluid models with good conservation properties for correcting kinetic models. The higher moments that are missing in the fluid models are provided by the kinetic codes, so that both kinetic and fluid codes compensate the weaknesses of each other in a closed feedback loop.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics", "Mathematics", "Computer Science" ]
Title: Visual Integration of Data and Model Space in Ensemble Learning, Abstract: Ensembles of classifier models typically deliver superior performance and can outperform single classifier models given a dataset and classification task at hand. However, the gain in performance comes at the cost of comprehensibility, posing a challenge to understanding how each model affects the classification outputs and where the errors come from. We propose a tight visual integration of the data and the model space for exploring and combining classifier models. We introduce a workflow that builds upon this visual integration and enables the effective exploration of classification outputs and models. We then present a use case in which we start with an ensemble automatically selected by a standard ensemble selection algorithm, and show how we can manipulate models and alternative combinations.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: A Unified Strouhal-Reynolds Number Relationship for Laminar Vortex Streets Generated by Different Shaped Obstacles, Abstract: A new Strouhal-Reynolds number relationship, $St=1/(A+B/Re)$, has recently been proposed based on observations of laminar vortex shedding from circular cylinders in a flowing soap film. Since the new $St$-$Re$ relation was derived from a general physical consideration, it raises the possibility that it may be applicable to vortex shedding from bodies other than circular ones. The work presented herein provides experimental evidence that this is the case. Our measurements also show that in the asymptotic limit ($Re\rightarrow\infty$), $St_{\infty}=1/A\simeq0.21$ is constant, independent of rod shape, leaving $B$ as the only parameter that is shape dependent.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Looking backward: From Euler to Riemann, Abstract: We survey the main ideas in the early history of the subjects on which Riemann worked and that led to some of his most important discoveries. The subjects discussed include the theory of functions of a complex variable, elliptic and Abelian integrals, the hypergeometric series, the zeta function, topology, differential geometry, integration, and the notion of space. We shall see that among Riemann's predecessors in all these fields, one name occupies a prominent place, this is Leonhard Euler. The final version of this paper will appear in the book \emph{From Riemann to differential geometry and relativity} (L. Ji, A. Papadopoulos and S. Yamada, ed.) Berlin: Springer, 2017.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Algorithmic Performance-Accuracy Trade-off in 3D Vision Applications Using HyperMapper, Abstract: In this paper we investigate an emerging application, 3D scene understanding, which is likely to be significant in the mobile space in the near future. The goal of this exploration is to reduce execution time while meeting our quality-of-result objectives. In previous work we showed for the first time that it is possible to map this application to power-constrained embedded systems, highlighting that choices made at the algorithmic design level have the most impact. As the algorithmic design space is too large to be evaluated exhaustively, we use a previously introduced multi-objective Random Forest Active Learning prediction framework, dubbed HyperMapper, to find good algorithmic designs. We show that HyperMapper generalizes to a recent cutting-edge 3D scene understanding algorithm on a modern GPU-based computer architecture, and that it automatically beats an expert human hand-tuning the algorithmic parameters of the class of computer vision applications considered in this paper. In addition, using crowd-sourcing through a 3D scene understanding Android app, we show that the Pareto front obtained on an embedded system can be used to accelerate the same application on all 83 crowd-sourced smartphones and tablets, with speedups ranging from 2x to over 12x.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: A uniform bound on the Brauer groups of certain log K3 surfaces, Abstract: Let U be the complement of a smooth anticanonical divisor in a del Pezzo surface of degree at most 7 over a number field k. We show that there is an effective uniform bound for the size of the Brauer group of U in terms of the degree of k.
[ 0, 0, 1, 0, 0, 0 ]
[ "Mathematics" ]
Title: Investigation of Using VAE for i-Vector Speaker Verification, Abstract: A new system for i-vector speaker recognition based on a variational autoencoder (VAE) is investigated. The VAE is a promising approach for developing accurate deep nonlinear generative models of complex data. Experiments show that the VAE provides speaker embeddings and can be effectively trained in an unsupervised manner. An LLR estimate for the VAE is developed, and experiments on NIST SRE 2010 data demonstrate its correctness. Additionally, we show that the performance of the VAE-based system in the i-vector space is close to that of the diagonal PLDA. Several interesting results are also observed in the experiments with $\beta$-VAE. In particular, we found that for $\beta\ll 1$, the VAE can be trained to capture the features of complex input data distributions in an effective way, which is hard to obtain in the standard VAE ($\beta=1$).
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Learning to Use Learners' Advice, Abstract: In this paper, we study a variant of the framework of online learning using expert advice with limited/bandit feedback. We consider each expert as a learning entity, seeking to more accurately reflect certain real-world applications. In our setting, the feedback at any time $t$ is limited in the sense that it is only available to the expert $i^t$ that has been selected by the central algorithm (forecaster), \emph{i.e.}, only the expert $i^t$ receives feedback from the environment and gets to learn at time $t$. We consider a generic black-box approach whereby the forecaster does not control or know the learning dynamics of the experts apart from knowing the following no-regret learning property: the average regret of any expert $j$ vanishes at a rate of at least $O(t_j^{\alpha-1})$ with $t_j$ learning steps, where $\alpha \in [0, 1]$ is a parameter. In the spirit of competing against the best action in hindsight in the multi-armed bandits problem, our goal here is to be competitive w.r.t. the cumulative losses the algorithm could receive by following the policy of always selecting one expert. We prove the following hardness result: without any coordination between the forecaster and the experts, it is impossible to design a forecaster achieving no-regret guarantees. In order to circumvent this hardness result, we consider a practical assumption allowing the forecaster to "guide" the learning process of the experts by filtering/blocking some of the feedback observed by them from the environment, \emph{i.e.}, not allowing the selected expert $i^t$ to learn at time $t$ for some time steps. We then design a novel no-regret learning algorithm for this problem setting by carefully guiding the feedback observed by the experts, and prove that it achieves a worst-case expected cumulative regret of $O(T^{\frac{1}{2-\alpha}})$ after $T$ time steps.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Microplasma generation by slow microwave in an electromagnetically induced transparency-like metasurface, Abstract: Microplasma generation using microwaves in an electromagnetically induced transparency (EIT)-like metasurface composed of two types of radiatively coupled cut-wire resonators with slightly different resonance frequencies is investigated. Microplasma is generated in either of the gaps of the cut-wire resonators as a result of strong enhancement of the local electric field associated with resonance and slow microwave effect. The threshold microwave power for plasma ignition is found to reach a minimum at the EIT-like transmission peak frequency, where the group index is maximized. A pump-probe measurement of the metasurface reveals that the transmission properties can be significantly varied by varying the properties of the generated microplasma near the EIT-like transmission peak frequency and the resonance frequency. The electron density of the microplasma is roughly estimated to be of order $1\times 10^{10}\,\mathrm{cm}^{-3}$ for a pump power of $15.8\,\mathrm{W}$ by comparing the measured transmission spectrum for the probe wave with the numerically calculated spectrum. In the calculation, we assumed that the plasma is uniformly generated in the resonator gap, that the electron temperature is $2\,\mathrm{eV}$, and that the elastic scattering cross section is $20 \times 10^{-16}\,\mathrm{cm}^2$.
[ 0, 1, 0, 0, 0, 0 ]
[ "Physics" ]
Title: Scalable Twin Neural Networks for Classification of Unbalanced Data, Abstract: Twin Support Vector Machines (TWSVMs) have emerged as an efficient alternative to Support Vector Machines (SVMs) for learning from imbalanced datasets. The TWSVM learns two non-parallel classifying hyperplanes by solving a pair of smaller-sized problems. However, it is unsuitable for large datasets, as it involves matrix operations. In this paper, we discuss a Twin Neural Network (Twin NN) architecture for learning from large unbalanced datasets. The Twin NN also learns an optimal feature map, allowing for better discrimination between classes. We also present an extension of this network architecture for multiclass datasets. Results presented in the paper demonstrate that the Twin NN generalizes well and scales well on large unbalanced datasets.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science", "Statistics" ]
Title: Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition, Abstract: Recurrent Neural Networks (RNNs) are powerful sequence modeling tools. However, when dealing with high-dimensional inputs, the training of RNNs becomes computationally expensive due to the large number of model parameters. This hinders RNNs from solving many important computer vision tasks, such as action recognition in videos and image captioning. To overcome this problem, we propose a compact and flexible structure, namely Block-Term tensor decomposition, which greatly reduces the parameters of RNNs and improves their training efficiency. Compared with alternative low-rank approximations, such as the tensor-train RNN (TT-RNN), our method, the Block-Term RNN (BT-RNN), is not only more concise (when using the same rank) but also able to attain a better approximation to the original RNNs with far fewer parameters. On three challenging tasks, including action recognition in videos, image captioning and image generation, BT-RNN outperforms TT-RNN and the standard RNN in terms of both prediction accuracy and convergence rate. Specifically, BT-LSTM utilizes 17,388 times fewer parameters than the standard LSTM to achieve an accuracy improvement of over 15.6% in the action recognition task on the UCF11 dataset.
[ 1, 0, 0, 1, 0, 0 ]
[ "Computer Science" ]
Title: On convergence of the sample correlation matrices in high-dimensional data, Abstract: In this paper, we consider an estimation problem concerning the matrix of correlation coefficients in the context of high-dimensional data settings. In particular, we revisit some results of Li and Rosalsky [Li, D. and Rosalsky, A. (2006). Some strong limit theorems for the largest entries of sample correlation matrices, The Annals of Applied Probability, 16, 1, 423-447]. Four of the main theorems of Li and Rosalsky (2006) are established in full generality, and we substantially simplify some of the proofs of the quoted paper. Furthermore, we generalize a theorem which is useful in deriving the existence of the pth moment as well as in studying convergence rates in laws of large numbers.
[ 0, 0, 1, 1, 0, 0 ]
[ "Mathematics", "Statistics" ]
Title: Program Synthesis from Visual Specification, Abstract: Program synthesis is the process of automatically translating a specification into computer code. Traditional synthesis settings require a formal, precise specification. Motivated by computer education applications where a student learns to code simple turtle-style drawing programs, we study a novel synthesis setting where only a noisy user-intention drawing is specified. This allows students to sketch their intended output, optionally together with their own incomplete program, to automatically produce a completed program. We formulate this synthesis problem as search in the space of programs, with the score of a state being the Hausdorff distance between the program output and the user drawing. We compare several search algorithms on a corpus consisting of real user drawings and the corresponding programs, and demonstrate that our algorithms can synthesize programs optimally satisfying the specification.
[ 1, 0, 0, 0, 0, 0 ]
[ "Computer Science" ]
Title: Gaussian process regression for forest attribute estimation from airborne laser scanning data, Abstract: While the analysis of airborne laser scanning (ALS) data often provides reliable estimates for certain forest stand attributes -- such as total volume or basal area -- there is still room for improvement, especially in estimating species-specific attributes. Moreover, while information on estimate uncertainty would be useful in various economic and environmental analyses of forests, a computationally feasible framework for uncertainty quantification in ALS is still missing. In this article, species-specific stand attribute estimation and uncertainty quantification (UQ) are approached using Gaussian process regression (GPR), which is a nonlinear and nonparametric machine learning method. Multiple species-specific stand attributes are estimated simultaneously: tree height, stem diameter, stem number, basal area, and stem volume. The cross-validation results show that GPR yields on average a 4.6% improvement in estimate RMSE over a state-of-the-art k-nearest neighbors (kNN) implementation, negligible bias, and well-performing UQ (credible intervals), while being computationally fast. The performance advantage over kNN and the feasibility of credible intervals persist even when smaller training sets are used.
[ 0, 0, 0, 1, 0, 0 ]
[ "Computer Science", "Statistics" ]