text | summary |
---|---|
Let G be a unipotent algebraic subgroup of some GL_m(C) defined over Q. We describe an algorithm for finding a finite set of generators of the subgroup G(Z) = G \cap GL_m(Z). This is based on a new proof of the result (in more general form due to Borel and Harish-Chandra) that such a finite generating set exists. | Constructing arithmetic subgroups of unipotent groups |
The vibrational equations of motion of both homo- and hetero-nuclear diatomic molecules are here derived for the first time. A diatomic molecule is first considered as a one-dimensional quantum mechanical oscillator. The second- and third-order Hamiltonian operators are then formed by substituting the number operator for the quantum number in the corresponding vibrational energy eigenvalues. The expectation values of the relative position and linear momentum operators of the two oscillating atoms are calculated by solving Heisenberg's equations of motion. Subsequently, the expectation values of the potential and kinetic energy operators are evaluated in all vibrational levels of the Morse potential. On the other hand, the stability theory of optical oscillators (lasers) is exploited to determine the stability conditions of an oscillating diatomic molecule. Remarkably, it turns out that diatomic molecules dissociate exactly at the energy level at which their equations of motion become unstable. We also determine the minimum oscillation frequency (cut-off frequency) of a diatomic molecule at the dissociation level of the Morse potential. Finally, energy conservation is illustrated for the vibrational motion of a diatomic molecule. | Stability conditions of diatomic molecules in the Heisenberg picture: inspired by the stability theory of lasers |
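For reference, the standard Morse-oscillator vibrational eigenvalues that the second- and third-order Hamiltonians above build on (textbook material, not taken from the paper) are
$$E_n = \hbar\omega_e\Big(n+\tfrac{1}{2}\Big) - \hbar\omega_e\chi_e\Big(n+\tfrac{1}{2}\Big)^2, \qquad n = 0,1,\dots,n_{\max}, \qquad n_{\max} \approx \frac{1}{2\chi_e} - \frac{1}{2},$$
where $\omega_e$ is the harmonic frequency, $\chi_e$ the anharmonicity constant, and $n_{\max}$ the last bound level below the dissociation limit.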
Neural networks have opened up many new opportunities to utilize remotely sensed images in meteorology. Common applications include image classification, e.g., to determine whether an image contains a tropical cyclone, and image translation, e.g., to emulate radar imagery for satellites that only have passive channels. However, many open questions remain regarding the use of neural networks in meteorology, such as best practices for evaluation, tuning and interpretation. This article highlights several strategies and practical considerations for neural network development that have not yet received much attention in the meteorological community, such as the concept of effective receptive fields, underutilized meteorological performance measures, and methods for NN interpretation, such as synthetic experiments and layer-wise relevance propagation. We also consider the process of neural network interpretation as a whole, recognizing it as an iterative scientist-driven discovery process, and breaking it down into individual steps that researchers can take. Finally, while most work on neural network interpretation in meteorology has so far focused on networks for image classification tasks, we expand the focus to also include networks for image translation. | Evaluation, Tuning and Interpretation of Neural Networks for Meteorological Applications |
Binary neutron star (NSNS) mergers can be sources of gravitational waves coincident with electromagnetic counterpart emission. To solidify their role as multimessenger sources, we present fully 3D, general relativistic, magnetohydrodynamic simulations of spinning NSNSs initially on quasicircular orbits that merge and undergo delayed collapse to a black hole (BH). The NSNSs consist of two identical stars modeled as $\Gamma=2$ polytropes with spin $\chi_{NS}= 0.36$ aligned along the direction of the total orbital angular momentum $L$. Each star is initially threaded by a dynamically unimportant interior dipole B-field. The field is extended into the exterior where a nearly force-free magnetosphere resembles that of a pulsar. The magnetic dipole moment $\mu$ is either aligned or perpendicular to $L$ and has the same initial magnitude for each orientation. For comparison, we also impose symmetry across the orbital plane in one case where $\mu$ in both stars is aligned along $L$. We find that the lifetime of the transient hypermassive neutron star remnant, the jet launching time, and the ejecta are very sensitive to the B-field orientation. By contrast, the physical properties of the BH + disk remnant, such as the mass and spin of the BH, the accretion rate, and the electromagnetic luminosity, are roughly independent of the initial B-field orientation. In addition, we find that imposing symmetry across the orbital plane does not play a significant role in the final outcome of the mergers. Our results show that an incipient jet emerges only when the seed B-field has a sufficiently large-scale poloidal component aligned to $L$. The lifetime [$\Delta t\gtrsim 140(M_{NS}/1.625M_\odot)\rm ms$] and Poynting luminosities [$L_{EM}\simeq 10^{52}$erg/s] of the jet, when it forms, are consistent with typical short gamma-ray bursts, as well as with the Blandford--Znajek mechanism for launching jets. | Magnetohydrodynamic Simulations of Binary Neutron Star Mergers in General Relativity: Effects of Magnetic Field Orientation on Jet Launching |
White Matter Hyperintensity (WMH) is an imaging feature related to various diseases such as dementia and stroke. Accurately segmenting WMH using computer technology is crucial for early disease diagnosis. However, this task remains challenging due to the small lesions with low contrast and high discontinuity in the images, which contain limited contextual and spatial information. To address this challenge, we propose a deep learning model called 3D Spatial Attention U-Net (3D SA-UNet) for automatic WMH segmentation using only Fluid Attenuation Inversion Recovery (FLAIR) scans. The 3D SA-UNet introduces a 3D Spatial Attention Module that highlights important lesion features, such as WMH, while suppressing unimportant regions. Additionally, to capture features at different scales, we extend the Atrous Spatial Pyramid Pooling (ASPP) module to a 3D version, enhancing the segmentation performance of the network. We evaluate our method on a publicly available dataset and demonstrate the effectiveness of the 3D spatial attention module and 3D ASPP in WMH segmentation. The experimental results demonstrate that our proposed 3D SA-UNet model achieves higher accuracy compared to other state-of-the-art 3D convolutional neural networks. | 3D SA-UNet: 3D Spatial Attention UNet with 3D ASPP for White Matter Hyperintensities Segmentation |
Continuous-time random walks offer powerful coarse-grained descriptions of transport processes. We here microscopically derive such a model for a Brownian particle diffusing in a deep periodic potential. We determine both the waiting-time and the jump-length distributions in terms of the parameters of the system, from which we analytically deduce the non-Gaussian characteristic function. We apply this continuous-time random walk model to characterize the underdamped diffusion of single Cesium atoms in a one-dimensional optical lattice. We observe excellent agreement between experimental and theoretical characteristic functions, without any free parameter. | Continuous-time random walk for a particle in a periodic potential |
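As background, the walker's characteristic function follows from the waiting-time density $\psi(t)$ and the jump-length density $\lambda(x)$ through the standard Montroll-Weiss relation (quoted here for orientation; it is not the paper's microscopic derivation):
$$\hat P(k,s) = \frac{1-\tilde\psi(s)}{s}\,\frac{1}{1-\tilde\psi(s)\,\hat\lambda(k)},$$
where tildes denote Laplace transforms in time ($t \to s$) and hats Fourier transforms in space ($x \to k$).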
The problem of decentralized multi-robot target tracking asks for jointly selecting actions, e.g., motion primitives, for the robots to maximize target tracking performance with local communications. One major challenge for practical implementations is to make target tracking approaches scalable for large-scale problem instances. In this work, we propose a general-purpose learning architecture toward collaborative target tracking at scale, with decentralized communications. Particularly, our learning architecture leverages a graph neural network (GNN) to capture local interactions of the robots and learns decentralized decision-making for the robots. We train the learning model by imitating an expert solution and implement the resulting model for decentralized action selection involving local observations and communications only. We demonstrate the performance of our GNN-based learning approach in a scenario of active target tracking with large networks of robots. The simulation results show our approach nearly matches the tracking performance of the expert algorithm, and yet runs several orders of magnitude faster with up to 100 robots. Moreover, it slightly outperforms a decentralized greedy algorithm while also running faster (especially with more than 20 robots). The results also exhibit our approach's generalization capability in previously unseen scenarios, e.g., larger environments and larger networks of robots. | Graph Neural Networks for Decentralized Multi-Robot Submodular Action Selection |
We give canonical resolutions of singularities of several cone varieties arising from invariant theory. We establish a connection between our resolutions and resolutions of singularities of closures of conjugacy classes in classical Lie algebras. | Resolution of singularities of null cones |
This is an opinion paper about the strengths and weaknesses of Deep Nets for vision. They are at the heart of the enormous recent progress in artificial intelligence and are of growing importance in cognitive science and neuroscience. They have had many successes but also have several limitations and there is limited understanding of their inner workings. At present Deep Nets perform very well on specific visual tasks with benchmark datasets but they are much less general purpose, flexible, and adaptive than the human visual system. We argue that Deep Nets in their current form are unlikely to be able to overcome the fundamental problem of computer vision, namely how to deal with the combinatorial explosion, caused by the enormous complexity of natural images, and obtain the rich understanding of visual scenes that the human visual system achieves. We argue that this combinatorial explosion takes us into a regime where "big data is not enough" and where we need to rethink our methods for benchmarking performance and evaluating vision algorithms. We stress that, as vision algorithms are increasingly used in real world applications, performance evaluation is not merely an academic exercise but has important consequences in the real world. It is impractical to review the entire Deep Net literature so we restrict ourselves to a limited range of topics and references which are intended as entry points into the literature. The views expressed in this paper are our own and do not necessarily represent those of anybody else in the computer vision community. | Deep Nets: What have they ever done for Vision? |
Some peculiarities of the exploitation of the entropy inequality in the case of weakly nonlocal continuum theories are investigated and refined. As an example, it is shown that the proper application of the Liu procedure leads to the Ginzburg-Landau equation in the case of a weakly nonlocal extension of the constitutive space of the simplest internal variable theories. | Weakly nonlocal continuum physics - the Ginzburg-Landau equation |
Direct detection experiments tend to lose sensitivity when searching for a sub-MeV light dark matter candidate because of the recoil energy threshold. However, such light dark matter particles can be accelerated by energetic cosmic rays such that they can be detected with existing detectors. We derive constraints on the scattering of boosted light dark matter off electrons from the XENON100/1T experiment. We illustrate that the energy dependence of the cross section plays a crucial role in improving both the detection sensitivity and the complementarity of direct detection and other experiments. | Exploring for sub-MeV Boosted Dark Matter from Xenon Electron Direct Detection |
In most clinical trials, patients are randomized with equal probability among treatments to obtain an unbiased estimate of the treatment effect. Response-adaptive randomization (RAR) has been proposed for ethical reasons, where the randomization ratio is tilted successively to favor the better performing treatment. However, the substantial disagreement regarding bias due to time-trends in adaptive randomization is not fully recognized. The type-I error is inflated in traditional Bayesian RAR approaches when a time-trend is present. In our approach, patients are assigned in blocks and the randomization ratio is recomputed per block, rather than per patient as in traditional adaptive randomization. We further investigate the design under a range of scenarios for both frequentist and Bayesian designs. We compare our method with equal randomization and with different numbers of blocks, including the traditional RAR design where the randomization ratio is altered on a patient-by-patient basis. The analysis is stratified if there are two or more patients in each block. Small blocks should be avoided due to the possibility of not acquiring any information on the $\mu_i$. On the other hand, RAR with large blocks strikes a good balance between efficiency and assigning more subjects to the better-performing treatment, while retaining blocked RAR's unique unbiasedness. | Robust Blocked Response-Adaptive Randomization Designs |
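A minimal simulation sketch of the block-wise ratio update described above, assuming a two-arm trial with binary outcomes and a beta-binomial Bayesian model; all names, priors, and the cap on the allocation probability are illustrative choices, not taken from the paper:

```python
import numpy as np

def blocked_rar(true_rates, n_blocks=10, block_size=20, n_draws=5000, seed=0):
    """Two-arm blocked response-adaptive randomization with binary outcomes.

    The allocation probability for arm 1 is held fixed within a block and
    recomputed between blocks as the posterior probability that arm 1 beats
    arm 0 under independent Beta(1, 1) priors (capped to avoid extreme ratios).
    """
    rng = np.random.default_rng(seed)
    succ, fail = np.zeros(2), np.zeros(2)
    alloc = 0.5                                  # start from equal randomization
    trajectory = []
    for _ in range(n_blocks):
        arms = rng.binomial(1, alloc, size=block_size)      # whole block at the current ratio
        outcomes = rng.binomial(1, np.take(true_rates, arms))
        for a in (0, 1):
            succ[a] += outcomes[arms == a].sum()
            fail[a] += (arms == a).sum() - outcomes[arms == a].sum()
        p0 = rng.beta(1 + succ[0], 1 + fail[0], size=n_draws)   # posterior draws, arm 0
        p1 = rng.beta(1 + succ[1], 1 + fail[1], size=n_draws)   # posterior draws, arm 1
        alloc = float(np.clip((p1 > p0).mean(), 0.1, 0.9))      # ratio for the next block
        trajectory.append(alloc)
    return trajectory

print(blocked_rar(true_rates=[0.3, 0.5]))
```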
We $q$-enumerate lozenge tilings of a hexagon with three bowtie-shaped regions removed from three non-consecutive sides. The unweighted version of the result generalizes a problem posed by James Propp on the enumeration of lozenge tilings of a hexagon of side-lengths $2n,2n+3,2n,2n+3,2n,2n+3$ (in cyclic order) with the central unit triangles on the $(2n+3)$-sides removed. | A $q$-enumeration of lozenge tilings of a hexagon with three dents |
We introduce a formalism to describe 2D-Potentials for 2D-matter (or charge) distributions with arbitrary elliptical symmetry including varying eccentricity and twisting of the iso-density curves. We use this approach to describe elliptical matter distributions such as elliptical galaxies or clusters as gravitational lenses. Figures are available upon request: [email protected] | A toolbox for general elliptical gravitational lenses |
We propose a new cyclic proof system for automated, equational reasoning about the behaviour of pure functional programs. The key to the system is the way in which cyclic proof and equational reasoning are mediated by the use of contextual substitution as a cut rule. We show that our system, although simple, already subsumes several of the approaches to implicit induction variously known as "inductionless induction", "rewriting induction", and "proof by consistency". By restricting the form of the traces, we show that global correctness in our system can be verified incrementally, taking advantage of the well-known size-change principle, which leads to an efficient implementation of proof search. Our CycleQ tool, accessible as a GHC plugin, shows promising results on a number of standard benchmarks. | CycleQ: An Efficient Basis for Cyclic Equational Reasoning |
The magnetic field and temperature dependences of the magnetization in the paramagnetic phase of Mn1-xFexSi solid solutions with x<0.3 are investigated in the range B<5 T and T<60 K. It is found that the field dependences of the magnetization M(B,T=const) exhibit scaling behavior of the form $B\,\partial M/\partial B - M = F(B/(T-T_s))$, where $T_s$ denotes an empirically determined temperature of the transition into the magnetic phase with fluctuation-driven short-range magnetic order and $F(\chi)$ is a universal scaling function for a given composition. The scaling relation allows one to conclude that the magnetization in the paramagnetic phase of Mn1-xFexSi is represented by the sum of two terms. The first term saturates as a function of the scaling variable $\chi=B/(T-T_s)$, whereas the second depends linearly on the magnetic field. A simple analytical formula describing the magnetization is derived and applied to estimate the parameters characterizing localized magnetic moments in the studied system. The obtained data may be qualitatively interpreted assuming magnetic inhomogeneity of the paramagnetic phase on the nanoscale. | Magnetization scaling in the paramagnetic phase of Mn1-xFexSi solid solutions |
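In explicit form, the scaling relation and the two-term decomposition stated above read (writing the saturating term as $M_0 f(\chi)$ and the linear term as $\chi_0 B$; this notation is ours, not the paper's):
$$B\,\frac{\partial M}{\partial B} - M = F(\chi), \qquad M(B,T) = M_0\,f(\chi) + \chi_0 B, \qquad \chi = \frac{B}{T-T_s},$$
so the term linear in $B$ cancels on the left-hand side and $F(\chi) = M_0\big[\chi f'(\chi) - f(\chi)\big]$ indeed depends on the scaling variable alone.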
For every prime $p > 2$ we exhibit a Cayley graph of $\mathbb{Z}_p^{2p+3}$ which is not a CI-graph. This proves that an elementary Abelian $p$-group of rank greater than or equal to $2p+3$ is not a CI-group. The proof is elementary and uses only multivariate polynomials and basic tools of linear algebra. Moreover, we apply our technique to give a uniform explanation for the recent works concerning the bound. | Elementary Abelian p-groups of rank 2p+3 are not CI-groups |
We investigate the consequences of vector leptoquarks for the rare semileptonic lepton flavour violating decays of the $B$ meson, which are promising and effective channels to probe new physics signals. We constrain the resulting new leptoquark parameter space by using the branching ratios of the $B_{s, d} \to l^+ l^-$, $K_L \to l^+ l^-$ and $\tau^- \to l^- \gamma$ processes. We estimate the branching ratios of the rare lepton flavour violating $B \to K(\pi)l_i^- l_j^+$ processes using the constrained leptoquark couplings. We also compute the forward-backward asymmetries and the lepton non-universality parameters of the LFV decays in the vector leptoquark model. Furthermore, we study the effect of the vector leptoquark on the $(g-2)_\mu$ anomaly. | Rare semileptonic $B \to K(\pi)l_i^- l_j^+$ decay in vector leptoquark model |
Study of the neutrinoless double beta decay, $0\nu\beta\beta$, includes a variety of problems of nuclear structure theory. They are reviewed here. The problems range from the mechanism of the decay, i.e. exchange of a light Majorana neutrino versus the exchange of some heavy, so far unobserved particle. Next, the proper expressions for the corresponding operator are described, which should include the effects of the nucleon size and of the recoil-order terms in the hadronic current. The issue of the proper treatment of short-range correlations, in particular for the case of heavy-particle exchange, is also discussed. The variety of methods employed these days in the theoretical evaluation of the nuclear matrix elements $M^{0\nu}$ is briefly described, and the difficulties causing the spread and hence uncertainty in the values of $M^{0\nu}$ are discussed. Finally, the issues of axial current quenching and of the resonance enhancement in the case of double electron capture are described. | Nuclear structure and double beta decay |
This paper is concerned with a singular flux-function limit of the Riemann solutions to a deposition model. As a result, it is shown that the Riemann solutions to the deposition model just converge to the corresponding Riemann solutions to the limit system, which is one of typical models admitting delta-shocks. Especially, the phenomenon of concentration and the formation of delta-shocks in the limit are analyzed in detail, and the process of concentration is numerically simulated. | Concentration in flux-function limits of solutions to a deposition model |
This study proposes an efficient algorithm for score computation in regime-switching models and, derived from it, an efficient expectation-maximization (EM) algorithm. Unlike existing algorithms, this algorithm does not rely on forward-backward filtering for smoothed regime probabilities and involves only forward computation. Moreover, the score algorithm is readily extended to compute the Hessian matrix. | Efficient Score Computation and Expectation-Maximization Algorithm in Regime-Switching Models |
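A minimal sketch of a forward-only (Hamilton-filter) likelihood recursion for a Gaussian regime-switching model, to illustrate the kind of purely forward computation referred to above; this is a generic textbook filter, not the paper's algorithm, and all names are illustrative:

```python
import numpy as np
from scipy.stats import norm

def forward_loglik(y, P, mu, sigma, pi0=None):
    """Forward (Hamilton) filter for a Markov regime-switching Gaussian model.

    y     : (T,) observations
    P     : (K, K) transition matrix, P[i, j] = Pr(s_t = j | s_{t-1} = i)
    mu    : (K,) regime means
    sigma : (K,) regime standard deviations
    pi0   : (K,) initial regime probabilities (uniform if None)
    The likelihood (and, by differentiating this recursion, the score)
    requires only this forward pass; no backward smoothing is used.
    """
    K = len(mu)
    prob = np.full(K, 1.0 / K) if pi0 is None else np.asarray(pi0, dtype=float)
    loglik, filt = 0.0, np.zeros((len(y), K))
    for t, yt in enumerate(y):
        pred = prob @ P                                   # one-step-ahead regime probabilities
        joint = pred * norm.pdf(yt, loc=mu, scale=sigma)  # times observation densities
        c = joint.sum()                                   # likelihood contribution at time t
        loglik += np.log(c)
        prob = joint / c                                  # filtered probabilities
        filt[t] = prob
    return loglik, filt

# toy usage
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
P = np.array([[0.95, 0.05], [0.10, 0.90]])
print(forward_loglik(y, P, mu=np.array([0.0, 2.0]), sigma=np.array([1.0, 1.0]))[0])
```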
In this paper, the major upgrades and technical improvements of the buffer gas handling system for the cryogenic stopping cell of the FRS Ion Catcher at GSI/FAIR (in Darmstadt, Germany) are described. The upgrades include implementation of new gas lines and gas purifiers to achieve a higher buffer gas cleanliness for a more efficient extraction of reactive ions as well as suppression of the molecular background ionized in the stopping cell. Furthermore, additional techniques have been implemented for improved monitoring and quantification of the purity of the helium buffer gas. | Recent Upgrades of the Gas Handling System for the Cryogenic Stopping Cell of the FRS Ion Catcher |
In a tight-binding model of AA-stacked bilayer graphene, it is demonstrated that a bound defect state within the region of continuous spectrum can exist stably with respect to variations in the strength of a perpendicular magnetic field. This is accomplished by creating a defect that is compatible with the interlayer coupling, thereby shielding the bound state from the effects of the continuous spectrum, which varies erratically in a pattern known as the Hofstadter butterfly. | Stable defect states in the continuous spectrum of bilayer graphene with magnetic field |
We introduce a setup of model uncertainty in discrete time. In this setup we derive dual expressions for the super-replication prices of game options with upper semicontinuous payoffs. We show that the super-replication price is equal to the supremum, over a special (non-dominated) set of martingale measures, of the corresponding Dynkin game values. This type of result is also new for American options. | Hedging of Game Options under Model Uncertainty in Discrete Time |
Symmetric independence relations are often studied using graphical representations. Ancestral graphs or acyclic directed mixed graphs with $m$-separation provide classes of symmetric graphical independence models that are closed under marginalization. Asymmetric independence relations appear naturally for multivariate stochastic processes, for instance in terms of local independence. However, no class of graphs representing such asymmetric independence relations, which is also closed under marginalization, has been developed. We develop the theory of directed mixed graphs with $\mu$-separation and show that this provides a graphical independence model class which is closed under marginalization and which generalizes previously considered graphical representations of local independence. For statistical applications, it is pivotal to characterize graphs that induce the same independence relations as such a Markov equivalence class of graphs is the object that is ultimately identifiable from observational data. Our main result is that for directed mixed graphs with $\mu$-separation each Markov equivalence class contains a maximal element which can be constructed from the independence relations alone. Moreover, we introduce the directed mixed equivalence graph as the maximal graph with edge markings. This graph encodes all the information about the edges that is identifiable from the independence relations, and furthermore it can be computed efficiently from the maximal graph. | Markov equivalence of marginalized local independence graphs |
Spatial Mobile Crowdsourcing (SMCS) can be leveraged by exploiting the capabilities of the Social Internet-of-Things (SIoT) to execute spatial tasks. Typically, in SMCS, a task requester aims to recruit a subset of IoT devices and commission them to travel to the task location. However, because of the exponential increase of IoT networks and their diversified devices (e.g., multiple brands, different communication channels, etc.), recruiting the appropriate devices/workers is becoming a challenging task. To this end, in this paper, we develop a recruitment process for SMCS platforms using automated SIoT service discovery to select trustworthy workers satisfying the requester requirements. The method we propose consists mainly of two stages: 1) a worker filtering stage, aiming at reducing the workers' search space to a subset of potential trustworthy candidates using the Louvain community detection algorithm (CD) applied to SIoT relation graphs. Next, 2) a selection process stage that uses an Integer Linear Program (ILP) to determine the final set of selected devices/workers. The ILP maximizes a worker efficiency metric incorporating the skills/specs level, recruitment cost, and trustworthiness level of the recruited IoT devices. Selected experiments analyze the performance of the proposed CD-ILP algorithm using a real-world dataset and show its superiority in providing an effective recruitment strategy compared to an existing stochastic algorithm. | A Trustworthy Recruitment Process for Spatial Mobile Crowdsourcing in Large-scale Social IoT |
We show that the vibrations of a nanomechanical resonator can be cooled to near its quantum ground state by tunnelling injection of electrons from an STM tip. The interplay between two mechanisms for coupling the electronic and mechanical degrees of freedom results in a bias-voltage dependent difference between the probability amplitudes for vibron emission and absorption during tunneling. For a bias voltage just below the Coulomb blockade threshold we find that absorption dominates, which leads to cooling corresponding to an average vibron population of the fundamental bending mode of 0.2. | Cooling of nanomechanical resonator by thermally activated single-electron transport |
The properties of biological microswimmers are to a large extent determined by fluid-mediated interactions, which govern their propulsion, perception of their surrounding, and the steering of their motion for feeding or in pursuit. Transferring similar functionalities to synthetic microswimmers poses major challenges, and the design of favorable steering and pursuit strategies is fundamental in such an endeavor. Here, we apply a squirmer model to investigate the pursuit of pursuer-target pairs with an implicit sensing mechanism and limited hydrodynamic steering abilities of the pursuer. Two hydrodynamic steering strategies are applied for the pursuer's propulsion direction by adaptation of its surface flow field, (i) reorientation toward the target with limited maneuverability, and (ii) alignment with the target's propulsion direction combined with speed adaptation. Depending on the nature of the microswimmer propulsion (puller, pusher) and the velocity-adaptation scheme, stable cooperatively moving states can be achieved, characterized by specific squirmer arrangements and controllable trajectories. Importantly, pursuer and target mutually affect their motion and trajectories. | Hydrodynamic pursuit by cognitive self-steering microswimmers |
Core-level spectra of liquids can be difficult to interpret due to the presence of a range of local environments. We present computational methods for investigating core-level spectra based on the idea that both local structural parameters and the X-ray spectra behave as functions of the local atomic configuration around the absorbing site. We identify correlations between structural parameters and spectral intensities in defined regions of interest, using the oxygen K-edge excitation spectrum of liquid water as a test case. Our results show that this kind of analysis can find the main structure-spectral relationships of ice, liquid water, and supercritical water. | Disentangling Structural Information From Core-level Excitation Spectra |
We prove that a finite-dimensional cocommutative Hopf algebra $H$ is local, if and only if the subalgebra generated by the first term of its coradical filtration $H_1$ is local. In particular if $H$ is connected, $H$ is local if and only if all the primitive elements of $H$ are nilpotent. | Local criteria for cocommutative Hopf algebras |
Construct, Merge, Solve and Adapt (CMSA) is a general hybrid metaheuristic for solving combinatorial optimization problems. At each iteration, CMSA (1) constructs feasible solutions to the tackled problem instance in a probabilistic way and (2) solves a reduced problem instance (if possible) to optimality. The construction of feasible solutions is hereby problem-specific, usually involving a fast greedy heuristic. The goal of this paper is to design a problem-agnostic CMSA variant whose exclusive input is an integer linear program (ILP). In order to reduce the complexity of this task, the current study is restricted to binary ILPs. In addition to a basic problem-agnostic CMSA variant, we also present an extended version that makes use of a constraint propagation engine for constructing solutions. The results show that our technique is able to match the upper bounds of the standalone application of CPLEX in the context of rather easy-to-solve instances, while it generally outperforms the standalone application of CPLEX in the context of hard instances. Moreover, the results indicate that the support of the constraint propagation engine is useful in the context of problems for which finding feasible solutions is rather difficult. | Generic CP-Supported CMSA for Binary Integer Linear Programs |
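A compact, problem-agnostic sketch of the CMSA loop described above, with the problem-specific pieces (probabilistic construction, sub-instance solver, objective) passed in as callables; the exact solver interface (e.g., CPLEX) is abstracted away and all names, parameters, and the aging scheme details are illustrative, not the paper's implementation:

```python
def cmsa(construct, solve_subinstance, cost, n_iter=100, n_constructions=5, max_age=3):
    """Generic Construct-Merge-Solve-Adapt loop (minimization).

    construct()          -> a feasible solution, given as a set of solution components
    solve_subinstance(C) -> best solution (a set of components) restricted to the merged set C
    cost(sol)            -> objective value of a solution
    """
    best = None
    age = {}  # merged sub-instance: component -> iterations since it last appeared in a solution
    for _ in range(n_iter):
        # Construct & Merge: probabilistically build solutions and merge their components
        for _ in range(n_constructions):
            for comp in construct():
                age.setdefault(comp, 0)
        # Solve: optimize over the reduced instance spanned by the merged components
        candidate = solve_subinstance(frozenset(age))
        if best is None or cost(candidate) < cost(best):
            best = candidate
        # Adapt: reset the age of components used by the solver, retire stale ones
        for comp in list(age):
            age[comp] = 0 if comp in candidate else age[comp] + 1
            if age[comp] > max_age:
                del age[comp]
    return best
```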
I construct regulator indecomposable higher Chow cycles in elliptic surfaces satisfying certain conditions. As an application, I give an alternative proof of a theorem of Gordon and Lewis, which asserts that there exist real regulator indecomposable cycles in a product of general elliptic curves. | A simple construction of indecomposable higher Chow cycles in elliptic surfaces |
We report new experimental hyperfine structure (HFS) constants of neutral and singly ionized scandium (Sc I and Sc II). We observed spectra of Sc-Ar and Sc-Ne hollow cathode discharges in the region 200-2500 nm (50,000-4000 cm$^{-1}$) using Fourier transform spectrometers. The measurements show significant HFS patterns in 1431 spectral lines fitted in our 12 spectra given in Table 1. These were fitted using the computer package Xgremlin to determine the magnetic dipole hyperfine interaction constant (A) for 185 levels in Sc I and 6 levels in Sc II, of which 80 Sc I levels had no previous measurements. The uncertainty in the HFS A constant is between 1 $\times$ 10$^{-4}$ and 5 $\times$ 10$^{-4}$ cm$^{-1}$. | Hyperfine Structure Constants of Sc I and Sc II with Fourier Transform Spectroscopy |
The capacity of caching networks has received considerable attention in the past few years. A particularly studied setting is the shared link caching network, in which a single source with access to a file library communicates with multiple users, each having the capability to store segments (packets) of the library files, over a shared multicast link. Each user requests one file from the library according to a common demand distribution and the server sends a coded multicast message to satisfy all users at once. The problem consists of finding the smallest possible average codeword length to satisfy such requests. In this paper, we consider the generalization to the case where each user places L >= 1 independent requests according to the same common demand distribution. We propose an achievable scheme based on random vector (packetized) caching placement and multiple groupcast index coding, shown to be order-optimal in the asymptotic regime in which the number of packets per file B goes to infinity. We then show that the scalar (B = 1) version of the proposed scheme can still preserve order-optimality when the number of per-user requests L is large enough. Our results provide the first order-optimal characterization of the shared link caching network with multiple random requests, revealing the key effects of L on the performance of caching-aided coded multicast schemes. | Caching-Aided Coded Multicasting with Multiple Random Requests |
In the absence of an external magnetic field and a spin-polarized charge current, an antiferromagnetic system supports two degenerate magnon modes. An applied thermal bias activates the magnetic dynamics, leading to a magnon flow from the hot to the cold edge (magnonic spin Seebeck current). Both degenerate bands contribute to the magnon current but the orientations of the magnetic moments underlying the magnons are opposite in different bands. Therefore, while the magnon current is nonzero, the net spin current is zero. | Rectification of the spin Seebeck current in noncollinear antiferromagnets |
Recent progress in contrastive learning has revolutionized unsupervised representation learning. Concretely, multiple views (augmentations) from the same image are encouraged to map to similar embeddings, while views from different images are pulled apart. In this paper, through visualizing and diagnosing classification errors, we observe that current contrastive models are ineffective at localizing the foreground object, limiting their ability to extract discriminative high-level features. This is due to the fact that the view generation process treats pixels in an image uniformly. To address this problem, we propose a data-driven approach for learning invariance to backgrounds. It first estimates foreground saliency in images and then creates augmentations by copy-and-pasting the foreground onto a variety of backgrounds. The learning still follows the instance discrimination pretext task, so that the representation is trained to disregard background content and focus on the foreground. We study a variety of saliency estimation methods, and find that most methods lead to improvements for contrastive learning. With this approach (DiLo), significant performance gains are achieved for self-supervised learning on ImageNet classification, and also for object detection on PASCAL VOC and MSCOCO. | Distilling Localization for Self-Supervised Representation Learning |
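A minimal numpy sketch of the saliency-guided copy-and-paste augmentation described above; the saliency estimator itself is abstracted away, and the function name, array shapes, and threshold are illustrative assumptions rather than the DiLo implementation:

```python
import numpy as np

def background_swap(image, saliency, background, threshold=0.5):
    """Composite the salient foreground of `image` onto a new `background`.

    image, background : float arrays of shape (H, W, 3) with values in [0, 1]
    saliency          : float array of shape (H, W) in [0, 1] from any saliency estimator
    """
    mask = (saliency > threshold).astype(np.float32)[..., None]   # (H, W, 1) foreground mask
    return mask * image + (1.0 - mask) * background

# toy usage with random arrays standing in for a real image, saliency map and background
rng = np.random.default_rng(0)
img, sal, bg = rng.random((64, 64, 3)), rng.random((64, 64)), rng.random((64, 64, 3))
augmented = background_swap(img, sal, bg)
```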
Beam splitters are routinely used for generating entanglement. Their entangling properties have been studied extensively, with nonclassicality of the input states a prerequisite for entanglement at the output. Here we quantify the amount of entanglement generated by weakly-reflecting beam splitters, and look for nonclassical states that are not entangled by general beam splitters. We find that inputting highly nonclassical combinations of unpolarized states that are squeezed and displaced onto a beam splitter can still yield separable output states. This result is crucial for understanding the generation of modal entanglement by beam splitters. | Nonclassical states that generate zero entanglement with a beam splitter |
The problem of statistical learning is to construct an accurate predictor of a random variable as a function of a correlated random variable on the basis of an i.i.d. training sample from their joint distribution. Allowable predictors are constrained to lie in some specified class, and the goal is to approach asymptotically the performance of the best predictor in the class. We consider two settings in which the learning agent only has access to rate-limited descriptions of the training data, and present information-theoretic bounds on the predictor performance achievable in the presence of these communication constraints. Our proofs do not assume any separation structure between compression and learning and rely on a new class of operational criteria specifically tailored to joint design of encoders and learning algorithms in rate-constrained settings. | Achievability results for statistical learning under communication constraints |
Nanopore sequencers generate electrical raw signals in real-time while sequencing long genomic strands. These raw signals can be analyzed as they are generated, providing an opportunity for real-time genome analysis. An important feature of nanopore sequencing, Read Until, can eject strands from sequencers without fully sequencing them, which provides opportunities to computationally reduce the sequencing time and cost. However, existing works utilizing Read Until either 1) require powerful computational resources that may not be available for portable sequencers or 2) lack scalability for large genomes, rendering them inaccurate or ineffective. We propose RawHash, the first mechanism that can accurately and efficiently perform real-time analysis of nanopore raw signals for large genomes using a hash-based similarity search. To enable this, RawHash ensures the signals corresponding to the same DNA content lead to the same hash value, regardless of the slight variations in these signals. RawHash achieves an accurate hash-based similarity search via an effective quantization of the raw signals such that signals corresponding to the same DNA content have the same quantized value and, subsequently, the same hash value. We evaluate RawHash on three applications: 1) read mapping, 2) relative abundance estimation, and 3) contamination analysis. Our evaluations show that RawHash is the only tool that can provide high accuracy and high throughput for analyzing large genomes in real-time. When compared to the state-of-the-art techniques, UNCALLED and Sigmap, RawHash provides 1) 25.8x and 3.4x better average throughput and 2) significantly better accuracy for large genomes, respectively. Source code is available at https://github.com/CMU-SAFARI/RawHash. | RawHash: Enabling Fast and Accurate Real-Time Analysis of Raw Nanopore Signals for Large Genomes |
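A simplified sketch of the core hash-based idea: quantize noisy raw-signal events so that nearby values agree, then pack consecutive quantized events into a single hash key. The bucket width, window length, and packing scheme here are illustrative assumptions, not RawHash's actual parameters:

```python
import numpy as np

def quantize(events, bucket_width=0.25):
    """Map raw signal event values to coarse integer buckets so that slightly
    different signals for the same DNA content (usually) land in the same bucket."""
    return np.floor(np.asarray(events) / bucket_width).astype(np.int64)

def hash_windows(quantized, k=6, bits_per_event=6):
    """Pack k consecutive quantized events into one integer hash key."""
    keys = []
    mask = (1 << bits_per_event) - 1
    for i in range(len(quantized) - k + 1):
        key = 0
        for q in quantized[i:i + k]:
            key = (key << bits_per_event) | (int(q) & mask)
        keys.append(key)
    return keys

# toy usage: two noisy reads of the same underlying events fall into the same
# buckets here and therefore hash identically (prints True)
signal_a = [0.51, 1.03, 0.27, 1.48, 0.99, 0.30]
signal_b = [0.60, 1.10, 0.30, 1.45, 0.95, 0.26]
print(hash_windows(quantize(signal_a)) == hash_windows(quantize(signal_b)))
```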
The European X-ray Free Electron Laser (XFEL.EU) is currently being commissioned in Schenefeld, Germany. From 2017 onwards it will provide spatially coherent X-rays of energies between 0.25\,keV and 25\,keV with a unique timing structure. One of the detectors foreseen at XFEL.EU for the soft X-ray regime (energies below 6\,keV) is a quasi column-parallel readout FastCCD developed by Lawrence Berkeley National Lab (LBNL) specifically for the XFEL.EU requirements. Its sensor has 1920$\times$960 pixels of 30\,$\mu$m $\times$30\,$\mu$m size with a beam hole in the middle of the sensor. The camera can be operated in full frame and frame store mode. With the FastCCD a frame rate of up to 120~fps can be achieved, but at XFEL.EU the camera settings are optimized for the 10\,Hz XFEL bunch-mode. The detector has been delivered to XFEL.EU. Results of the performance tests and calibration done using the XFEL.EU detector calibration infrastructure are presented quantifying noise level, gain and energy resolution. | Performance of the LBNL FastCCD for the European XFEL |
We have systematically studied the transport properties of La$_{2-x}$Y$_{x}$CuO$_{4}$ (LYCO) films of the T'-phase ($0.05\leq x \leq 0.30$). In this nominally "undoped" system, superconductivity is acquired in a certain Y doping range ($0.10\leq x \leq 0.20$). Measurements of the resistivity and Hall coefficients in the normal state and of the resistive critical field ($H^\rho_{c2}$) in the superconducting state of the T'-LYCO films show behavior similar to the known Ce-doped n-type cuprate superconductors, indicating an intrinsic electron-doping nature. The charge carriers are induced by oxygen deficiency. Non-superconducting Y-doped Pr- or Nd-based T'-phase cuprate films were also investigated for comparison, suggesting the crucial role of the radii of the A-site cations in the origin of superconductivity in the nominally "undoped" cuprates. Based on a reasonable scenario for the microscopic reduction process, we put forward a self-consistent interpretation of these experimental observations. | Origin of superconductivity in nominally "undoped" T'-La$_{2-x}$Y$_{x}$CuO$_{4}$ films |
This article studies extended Nori and local fundamental group schemes of Abelian varieties. We also discuss the birational invariance of these group schemes and study their behaviour under the Albanese and \'{e}tale morphisms. | On an extension of Nori and local fundamental group schemes |
The class of surfaces in 3-space possessing nontrivial deformations which preserve principal directions and principal curvatures (or, equivalently, the shape operator) was investigated by Finikov and Gambier as far back as in 1933. We review some of the known examples and results, demonstrate the integrability of the corresponding Gauss-Codazzi equations and draw parallels between this geometrical problem and the theory of compatible Poisson brackets of hydrodynamic type. It turns out that coordinate hypersurfaces of the n-orthogonal systems arising in the theory of compatible Poisson brackets of hydrodynamic type must necessarily possess deformations preserving the shape operator. | Surfaces in 3-space possessing nontrivial deformations which preserve the shape operator |
Only three of the dozen central compact objects (CCOs) in supernova remnants (SNRs) show thermal X-ray pulsations due to non-uniform surface temperature (hot-spots). The absence of X-ray pulsations from several unpulsed CCOs has motivated suggestions that they have uniform-temperature carbon atmospheres (UTCAs), which adequately fit their spectra with appropriate neutron star (NS) surface areas. This is in contrast to the two-temperature blackbody or hydrogen atmospheres that also fit well. Here we investigate the applicability of UTCAs to CCOs. We show the following: (i) The phase-averaged spectra of the three pulsed CCOs can also be fitted with a UTCA of the appropriate NS area, despite pulsed CCOs manifestly having non-uniform surface temperature. A good spectral fit is therefore not strong support for the UTCA model of unpulsed CCOs. (ii) An improved spectrum of one unpulsed CCO, previously analyzed with a UTCA, does not allow an acceptable fit. (iii) For two unpulsed CCOs, the UTCA does not allow a distance compatible with the SNR distance. These results imply that, in general, CCOs must have hot, localized regions on the NS surface. We derive new X-ray pulse modulation upper limits on the unpulsed CCOs, and constrain their hot spot sizes and locations. We develop an alternative model that accounts for both the pulsed and unpulsed CCOs: a range of angles between hot spot and rotation axes consistent with an exponential distribution with scale factor $\lambda \sim 20^{\circ}$. We discuss physical mechanisms that could produce such small angles and small hot-spots. | Do Central Compact Objects have Carbon Atmospheres? |
Based on the results of recent surveys, we have constructed a relatively homogeneous set of observational data concerning the chemical and photometric properties of Low Surface Brightness galaxies (LSBs). We have compared the properties of this data set with the predictions of models of the chemical and spectrophotometric evolution of LSBs. The basic idea behind the models, i.e. that LSBs are similar to 'classical' High Surface Brightness spirals except for a larger angular momentum, is found to be consistent with the results of their comparison with these data. However, some observed properties of the LSBs (e.g. their colours, and specifically the existence of red LSBs) as well as the large scatter in these properties, cannot be reproduced by the simplest models with smoothly evolving star formation rates over time. We argue that the addition of bursts and/or truncations in the star formation rate histories can alleviate that discrepancy. | Chemical and spectrophotometric evolution of Low Surface Brightness galaxies |
We introduce the generalization of the Slave-Spin Mean-Field method to broken-symmetry phases. Through a variational approach we derive the single-particle energy shift in the mean-field equations which generates the appropriate self-consistent field responsible for the stabilization of the broken symmetry. With this correction the different flavours of the slave-spin mean-field are actually the same method and they give identical results to Kotliar-Ruckenstein slave-bosons and to the Gutzwiller approximation. We apply our formalism to the N\'eel antiferromagnetic state and study it in multi-orbital models as a function of the number of orbitals and Hund's coupling strength, providing phase diagrams in the interaction-doping plane. We show that the doped antiferromagnet in proximity of half-filling is typically unstable towards insulator-metal and magnetic-non magnetic phase separation. Hund's coupling extends the range of this antiferromagnet, and favors its phase separation. | Slave-spin mean field for broken-symmetry states: N\'eel antiferromagnetism and its phase separation in multi-orbital Hubbard models |
The magnetoelectroluminescence of conjugated organic polymer films is widely accepted to arise from a polaron pair mechanism, but their magnetoconductance is less well understood. Here we derive a new relationship between the experimentally measurable magnetoelectroluminescence and magnetoconductance and the theoretically calculable singlet yield of the polaron pair recombination reaction. This relationship is expected to be valid regardless of the mechanism of the magnetoconductance, provided the mobilities of the free polarons are independent of the applied magnetic field (i.e., provided one discounts the possibility of spin-dependent transport). We also discuss the semiclassical calculation of the singlet yield of the polaron pair recombination reaction for materials such as poly(2,5-dioctyloxy-paraphenylene vinylene) (DOO-PPV), the hyperfine fields in the polarons of which can be extracted from light-induced electron spin resonance measurements. The resulting theory is shown to give good agreement with experimental data for both normal (H-) and deuterated (D-) DOO-PPV over a wide range of magnetic field strengths once singlet-triplet dephasing is taken into account. Without this effect, which has not been included in any previous simulation of magnetoelectroluminescence, it is not possible to reproduce the experimental data for both isotopologues in a consistent fashion. Our results also indicate that the magnetoconductance of DOO-PPV cannot be solely due to the effect of the magnetic field on the dissociation of polaron pairs. | Magnetoelectroluminescence in organic light emitting diodes |
This paper investigates the Poisson geometry associated to a cluster algebra over the complex numbers, and its relationship to compatible torus actions. We show, under some assumptions, that each Noetherian cluster algebra has only finitely many torus-invariant Poisson prime ideals, and we show how to obtain them using the exchange matrix of an initial seed. In fact, these ideals are independent of the choice of compatible Poisson structure. In many interesting cases the ideals can be described more explicitly. | Toric Poisson Ideals in Cluster Algebras |
We present a high-resolution study of a massive dense core, JCMT 18354-0649S, with the Submillimeter Array. The core is mapped in continuum emission at 1.3 mm and in molecular lines including CH$_{3}$OH ($5_{23}$-$4_{13}$) and HCN (3-2). The dust core detected in the compact configuration has a mass of $47 M_{\odot}$ and a diameter of $2\arcsec$ (0.06 pc), which is further resolved into three condensations with a total mass of $42 M_{\odot}$ at higher spatial resolution. The HCN (3-2) line exhibits an asymmetric profile consistent with an infall signature. The infall rate is estimated to be $2.0\times10^{-3} M_{\odot}\cdot$yr$^{-1}$. The high-velocity HCN (3-2) line wings reveal an outflow with three lobes. Their total mass is $12 M_{\odot}$ and total momentum is $121 M_{\odot}\cdot$km s$^{-1}$. Analysis shows that N-bearing molecules, especially HCN, can trace both inflow and outflow. | Infall and outflow detections in a massive core JCMT 18354-0649S |
Let $X$ be an arbitrary non-compact hyperbolic Riemann surface, that is, not $\mathbb C$ or $\mathbb C^*$. Given a tuple of holomorphic differentials $\boldsymbol q=(q_2,\cdots,q_n)$ on $X$, one can define a Higgs bundle $(\mathbb{K}_{X,n},\theta(\boldsymbol q))$ in the Hitchin section. We show there exists a harmonic metric $h$ on $(\mathbb{K}_{X,n},\theta(\boldsymbol q))$ satisfying (i) $h$ weakly dominates $h_X$; (ii) $h$ is compatible with the real structure. Here $h_X$ is the Hermitian metric on $\mathbb{K}_{X,n}$ induced by the conformal complete hyperbolic metric $g_X$ on $X.$ Moreover, when $q_i(i=2,\cdots,n)$ are bounded with respect to $g_X$, we show such a harmonic metric on $(\mathbb{K}_{X,n},\theta(\boldsymbol q))$ satisfying (i)(ii) uniquely exists. With similar techniques, we show the existence of harmonic metrics for $SO(n,n+1)$-Higgs bundles in Collier's component and $Sp(4,\mathbb R)$-Higgs bundles in Gothen's component over $X$, under some mild assumptions. | Higgs bundles in the Hitchin section over non-compact hyperbolic surfaces |
In recent work, we derived the long-distance confining dynamics of certain QCD-like gauge theories formulated on small $S^1 \times \R^3$ based on symmetries, an index theorem, and Abelian duality. Here, we give the microscopic derivation. The solution reveals a new mechanism of confinement in QCD(adj) in the regime where we have control over both perturbative and nonperturbative aspects. In particular, consider SU(2) QCD(adj) theory with $1 \leq n_f \leq 4$ Majorana fermions, a theory which undergoes gauge symmetry breaking at small $S^1$. If the magnetic charge of the BPS monopole is normalized to unity, we show that confinement occurs due to condensation of objects with magnetic charge 2, not 1. Because of index theorems, we know that such an object cannot be a configuration of two identical monopoles. Its net topological charge must vanish, and hence it must be topologically indistinguishable from the perturbative vacuum. We construct such non-self-dual topological excitations, the magnetically charged, topologically null molecules of a BPS monopole and ${\bar{\rm KK}}$ antimonopole, which we refer to as magnetic bions. An immediate puzzle with this proposal is the apparent Coulomb repulsion between the BPS-${\bar{\rm KK}}$ pair. An attraction which overcomes the Coulomb repulsion between the two is induced by $2n_f$-fermion exchange. Bion condensation is also the mechanism of confinement in $\N=1$ SYM on the same four-manifold. The SU(N) generalization hints at a possible hidden integrability behind nonsupersymmetric QCD of affine Toda type, and allows us to analytically compute the mass gap in the gauge sector. We currently do not know the extension to $\R^4$. | Magnetic bion condensation: A new mechanism of confinement and mass gap in four dimensions |
Given a finite sequence of vectors $\mathcal F_0$ in $\C^d$ we characterize in a complete and explicit way the optimal completions of $\mathcal F_0$ obtained by adding a finite sequence of vectors with prescribed norms, where optimality is measured with respect to majorization (of the eigenvalues of the frame operators of the completed sequence). Indeed, we construct (in terms of a fast algorithm) a vector - that depends on the eigenvalues of the frame operator of the initial sequence $\mathcal F_0$ and the sequence of prescribed norms - that is a minimum for majorization among all eigenvalues of frame operators of completions with prescribed norms. Then, using the eigenspaces of the frame operator of the initial sequence $\mathcal F_0$, we describe the frame operators of all optimal completions for majorization. Hence, the concrete optimal completions with prescribed norms can be obtained using recent algorithmic constructions related to the Schur-Horn theorem. The well-known relation between majorization and tracial inequalities with respect to convex functions allows us to describe our results in the following equivalent way: given a finite sequence of vectors $\mathcal F_0$ in $\C^d$ we show that the completions with prescribed norms that minimize the convex potential induced by a strictly convex function are structural minimizers, in the sense that they do not depend on the particular choice of the convex potential. | Optimal frame completions with prescribed norms for majorization |
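For convenience, the standard majorization order used above: for $x, y \in \mathbb{R}^d$ with entries rearranged in decreasing order ($x^{\downarrow}$, $y^{\downarrow}$), one says $x \prec y$ ("$x$ is majorized by $y$") iff
$$\sum_{i=1}^{k} x_i^{\downarrow} \le \sum_{i=1}^{k} y_i^{\downarrow} \quad (1 \le k < d), \qquad \sum_{i=1}^{d} x_i^{\downarrow} = \sum_{i=1}^{d} y_i^{\downarrow},$$
and $x \prec y$ implies $\sum_i f(x_i) \le \sum_i f(y_i)$ for every convex $f$, which is the link to the convex potentials mentioned in the last sentence.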
This paper presents LiteEval, a simple yet effective coarse-to-fine framework for resource efficient video recognition, suitable for both online and offline scenarios. Exploiting decent yet computationally efficient features derived at a coarse scale with a lightweight CNN model, LiteEval dynamically decides on-the-fly whether to compute more powerful features for incoming video frames at a finer scale to obtain more details. This is achieved by a coarse LSTM and a fine LSTM operating cooperatively, as well as a conditional gating module to learn when to allocate more computation. Extensive experiments are conducted on two large-scale video benchmarks, FCVID and ActivityNet, and the results demonstrate LiteEval requires substantially less computation while offering excellent classification accuracy for both online and offline predictions. | LiteEval: A Coarse-to-Fine Framework for Resource Efficient Video Recognition |
Many tasks in computer vision and graphics fall within the framework of conditional image synthesis. In recent years, generative adversarial nets (GANs) have delivered impressive advances in quality of synthesized images. However, it remains a challenge to generate both diverse and plausible images for the same input, due to the problem of mode collapse. In this paper, we develop a new generic multimodal conditional image synthesis method based on Implicit Maximum Likelihood Estimation (IMLE) and demonstrate improved multimodal image synthesis performance on two tasks, single image super-resolution and image synthesis from scene layouts. We make our implementation publicly available. | Multimodal Image Synthesis with Conditional Implicit Maximum Likelihood Estimation |
Motivated by recent work on deep neural network (DNN)-based image compression methods showing potential improvements in image quality, savings in storage, and bandwidth reduction, we propose to perform image understanding tasks such as classification and segmentation directly on the compressed representations produced by these compression methods. Since the encoders and decoders in DNN-based compression methods are neural networks with feature-maps as internal representations of the images, we directly integrate these with architectures for image understanding. This bypasses decoding of the compressed representation into RGB space and reduces computational cost. Our study shows that accuracies comparable to networks that operate on compressed RGB images can be achieved while reducing the computational complexity up to $2\times$. Furthermore, we show that synergies are obtained by jointly training compression networks with classification networks on the compressed representations, improving image quality, classification accuracy, and segmentation performance. We find that inference from compressed representations is particularly advantageous compared to inference from compressed RGB images for aggressive compression rates. | Towards Image Understanding from Deep Compression without Decoding |
We show that there is an infinite set of primes $\mathcal{P}$ of density one, such that the family of \textit{all} Cayley graphs of $\mathrm{SL}(2,p)$, $p\in \mathcal{P}$, is a family of expanders. | Strong uniform expansion in $\mathrm{SL}(2,p)$ |
COnstraint-Based Reconstruction and Analysis (COBRA) provides a molecular mechanistic framework for integrative analysis of experimental data and quantitative prediction of physicochemically and biochemically feasible phenotypic states. The COBRA Toolbox is a comprehensive software suite of interoperable COBRA methods. It has found widespread applications in biology, biomedicine, and biotechnology because its functions can be flexibly combined to implement tailored COBRA protocols for any biochemical network. Version 3.0 includes new methods for quality controlled reconstruction, modelling, topological analysis, strain and experimental design, network visualisation as well as network integration of chemoinformatic, metabolomic, transcriptomic, proteomic, and thermochemical data. New multi-lingual code integration also enables an expansion in COBRA application scope via high-precision, high-performance, and nonlinear numerical optimisation solvers for multi-scale, multi-cellular and reaction kinetic modelling, respectively. This protocol can be adapted for the generation and analysis of a constraint-based model in a wide variety of molecular systems biology scenarios. This protocol is an update to the COBRA Toolbox 1.0 and 2.0. The COBRA Toolbox 3.0 provides an unparalleled depth of constraint-based reconstruction and analysis methods. | Creation and analysis of biochemical constraint-based models: the COBRA Toolbox v3.0 |
This paper investigates the mathematical properties of a stochastic version of the balanced 2D thermal quasigeostrophic (TQG) model of potential vorticity dynamics. This stochastic TQG model is intended as a basis for parametrisation of the dynamical creation of unresolved degrees of freedom in computational simulations of upper ocean dynamics when horizontal buoyancy gradients and bathymetry affect the dynamics, particularly at the submesoscale (250m--10km). Specifically, we have chosen the SALT (Stochastic Advection by Lie Transport) algorithm introduced in [1] and applied in [2,3] as our modelling approach. The SALT approach preserves the Kelvin circulation theorem and an infinite family of integral conservation laws for TQG. The goal of the SALT algorithm is to quantify the uncertainty in the process of up-scaling, or coarse-graining of either observed or synthetic data at fine scales, for use in computational simulations at coarser scales. The present work provides a rigorous mathematical analysis of the solution properties of the thermal quasigeostrophic (TQG) equations with stochastic advection by Lie transport (SALT) [4,5]. | Theoretical analysis and numerical approximation for the stochastic thermal quasi-geostrophic model |
Piezoresponse force microscopy (PFM) is a powerful tool for probing nanometer-scale ferroelectric and piezoelectric properties. Hysteretic switching of the phase and amplitude of the PFM response is believed to be the hallmark of ferroelectric and piezoelectric behavior, respectively. However, the application of PFM is limited by the fact that similar hysteretic effects may also arise from mechanisms not related to ferroelectricity or piezoelectricity. In this paper we report our studies on regular glass slides that show a ferroelectric-like signal without being ferroelectric and are frequently used as a substrate in PFM experiments. We demonstrate how the substrates and other environmental factors like relative humidity and experimental conditions may influence the PFM results on novel materials. | The role of substrates and environment in piezoresponse force microscopy: A case study with regular glass slides
Starting from the T-Q equations of the open spin-1 XXZ quantum spin chain with general integrable boundary terms, for values of the boundary parameters which satisfy a certain constraint, we derive a set of nonlinear integral equations (NLIEs) for the inhomogeneous open spin-1 XXZ chain. By taking the continuum limit of these NLIEs, and working in analogy with the open spin-1 XXZ chain with diagonal boundary terms, we compute the boundary and the Casimir energies of the corresponding supersymmetric sine-Gordon (SSG) model. We also present an analytical result for the effective central charge in the ultraviolet (UV) limit. | On the NLIE of (inhomogeneous) open spin-1 XXZ chain with general integrable boundary terms |
In quantum mechanics, a classical particle is raised to a wave-function, thereby acquiring many more degrees of freedom. For instance, in the semi-classical regime, while the position and momentum expectation values follow the classical trajectory, the uncertainty of a wave-packet can evolve and beat independently. We use this insight to revisit the dynamics of a 1d particle in a time-dependent harmonic well. One can solve it by considering time reparameterizations and the Virasoro group action to map the system to the harmonic oscillator with constant frequency. We prove that the problem of identifying such a simplifying time variable is naturally solved by quantizing the system and looking at the evolution of the width of a Gaussian wave-packet. We further show that the Ermakov-Lewis invariant for the classical evolution in a time-dependent harmonic potential is actually the quantum uncertainty of a Gaussian wave-packet. This naturally extends the classical Ermakov-Lewis invariant to a constant of motion for quantum systems following the Schrodinger equation. We conclude with a discussion of potential applications to quantum gravity and quantum cosmology. | Quantum Uncertainty as an Intrinsic Clock
The first generation of BrainScaleS, also referred to as BrainScaleS-1, is a neuromorphic system for emulating large-scale networks of spiking neurons. Following a "physical modeling" principle, its VLSI circuits are designed to emulate the dynamics of biological examples: analog circuits implement neurons and synapses with time constants that arise from their electronic components' intrinsic properties. It operates in continuous time, with dynamics typically matching an acceleration factor of 10000 compared to the biological regime. A fault-tolerant design allows it to achieve wafer-scale integration despite unavoidable analog variability and component failures. In this paper, we present the commissioning process of a BrainScaleS-1 wafer module, providing a short description of the system's physical components, illustrating the steps taken during its assembly and the measures taken to operate it. Furthermore, we reflect on the system's development process and the lessons learned to conclude with a demonstration of its functionality by emulating a wafer-scale synchronous firing chain, the largest spiking network emulation run with analog components and individual synapses to date. | From Clean Room to Machine Room: Commissioning of the First-Generation BrainScaleS Wafer-Scale Neuromorphic System
Beliefs are important determinants of an individual's choices and economic outcomes, so understanding how they comove and differ across individuals is of considerable interest. Researchers often rely on surveys that report individual beliefs as qualitative data. We propose using a Bayesian hierarchical latent class model to analyze the comovements and observed heterogeneity in categorical survey responses. We show that the statistical model corresponds to an economic structural model of information acquisition, which guides interpretation and estimation of the model parameters. An algorithm based on stochastic optimization is proposed to estimate a model for repeated surveys when responses follow a dynamic structure and conjugate priors are not appropriate. Guidance on selecting the number of belief types is also provided. Two examples are considered. The first shows that there is information in the Michigan survey responses beyond the consumer sentiment index that is officially published. The second shows that belief types constructed from survey responses can be used in a subsequent analysis to estimate heterogeneous returns to education. | Latent Dirichlet Analysis of Categorical Survey Responses |
Gesture typing is a method of text entry that is ergonomically well-suited to the form factor of touchscreen devices and allows for much faster input than tapping each letter individually. The QWERTY keyboard was, however, not designed with gesture input in mind and its particular layout results in a high frequency of gesture recognition errors. In this paper, we describe a new approach to quantifying the frequency of gesture input recognition errors through the use of modeling and simulating realistically imperfect user input. We introduce new methodologies for modeling randomized gesture inputs, efficiently reconstructing words from gestures on arbitrary keyboard layouts, and using these in conjunction with a frequency weighted lexicon to perform Monte Carlo evaluations of keyboard error rates or any other arbitrary metric. An open source framework, Dodona, is also provided that allows for these techniques to be easily employed and customized in the evaluation of a wide spectrum of possible keyboards and input methods. Finally, we perform an optimization procedure over permutations of the QWERTY keyboard to demonstrate the effectiveness of this approach and describe ways that future analyses can build upon these results. | A Monte Carlo Simulation Approach for Quantitatively Evaluating Keyboard Layouts for Gesture Input |
Recent nuclear magnetic resonance studies [A. Pustogow {\it et al.}, arXiv:1904.00047] have challenged the prevalent chiral triplet pairing scenario proposed for Sr$_2$RuO$_4$. To provide guidance from microscopic theory as to which other pair states might be compatible with the new data, we perform a detailed theoretical study of spin-fluctuation mediated pairing for this compound. We map out the phase diagram as a function of spin-orbit coupling, interaction parameters, and band-structure properties over physically reasonable ranges, comparing when possible with photoemission and inelastic neutron scattering data. We find that even-parity pseudospin singlet solutions dominate large regions of the phase diagram, but in certain regimes spin-orbit coupling favors a near-nodal odd-parity triplet superconducting state, which is either helical or chiral depending on the proximity of the $\gamma$ band to the van Hove points. A surprising near-degeneracy of the nodal $s^\prime$- and $d_{x^2-y^2}$-wave solutions leads to the possibility of a near-nodal time-reversal symmetry broken $s^\prime+id_{x^2-y^2}$ pair state. Predictions for the temperature dependence of the Knight shift for fields in and out of plane are presented for all states. | Knight Shift and Leading Superconducting Instability From Spin Fluctuations in Sr2RuO4
The $K$-receiver degraded broadcast channel with secrecy outside a bounded range is studied, in which a transmitter sends $K$ messages to $K$ receivers, and the channel quality gradually degrades from receiver $K$ to receiver 1. Each receiver $k$ is required to decode message $W_1,\ldots,W_k$, for $1\leq k\leq K$, and to be kept ignorant of $W_{k+2},\ldots,W_K$, for $k=1,\ldots, K-2$. Thus, each message $W_k$ is kept secure from receivers with at least two-level worse channel quality, i.e., receivers 1, $\ldots$, $k-2$. The secrecy capacity region is fully characterized. The achievable scheme designates one superposition layer to each message with binning employed for each layer. Joint embedded coding and binning are employed to protect all upper-layer messages from lower-layer receivers. Furthermore, the scheme allows adjacent layers to share rates so that part of the rate of each message can be shared with its immediate upper-layer message to enlarge the rate region. More importantly, an induction approach is developed to perform Fourier-Motzkin elimination of $2K$ variables from the order of $K^2$ bounds to obtain a close-form achievable rate region. An outer bound is developed that matches the achievable rate region, whose proof involves recursive construction of the rate bounds and exploits the intuition gained from the achievable scheme. | Degraded Broadcast Channel with Secrecy Outside a Bounded Range |
In this investigation the response to the scintillation light generated by through-going cosmic muons in liquid argon (LAr) was measured by two light guide technologies and two readout technologies after five weeks of running in the TallBo dewar at Fermilab. The response was remeasured after the dewar was drained of LAr, refilled, and then run again for an additional four weeks. After the dewar was refilled, there was clear evidence that the scintillation signal had dropped significantly. The two light guide technologies were developed at Indiana University and MIT/Fermilab. The two readout technologies were boards that passively or actively ganged 12 Hamamatsu MPPCs. Two possible explanations were identified for the degraded signal: the response of the two light guide technologies degraded due to damage caused by thermal cycling, and/or unknown differences in the trace residual Xe contamination in the fills of LAr led to the observed drop in scintillation light. Neither absorption nor quenching by N2, O2, and H2O contamination can account for the degradation. Neither the individual Hamamatsu MPPCs nor the passive/active ganging boards appear to have been affected by the thermal cycling. The path length distributions of the cosmics traversing the dewar appear quite similar in both event samples. | Differences in the response of two light guide technologies and two readout technologies after an exchange of liquid argon in the dewar |
Data augmentation has recently emerged as an essential component of modern training recipes for visual recognition tasks. However, data augmentation for video recognition has rarely been explored despite its effectiveness. The few existing augmentation recipes for video recognition naively extend image augmentation methods by applying the same operations to whole video frames. Our main idea is that the magnitude of augmentation operations for each frame needs to be changed over time to capture the real-world video's temporal variations. These variations should be generated to be as diverse as possible using few additional hyper-parameters during training. With this motivation, we propose a simple yet effective video data augmentation framework, DynaAugment. The magnitude of augmentation operations on each frame is changed by an effective mechanism, Fourier Sampling, which parameterizes diverse, smooth, and realistic temporal variations. DynaAugment also includes an extended search space suitable for video for automatic data augmentation methods. DynaAugment experimentally demonstrates that there is additional room for improvement over static augmentations on diverse video models. Specifically, we show the effectiveness of DynaAugment on various video datasets and tasks: large-scale video recognition (Kinetics-400 and Something-Something-v2), small-scale video recognition (UCF-101 and HMDB-51), fine-grained video recognition (Diving-48 and FineGym), video action segmentation on Breakfast, video action localization on THUMOS'14, and video object detection on MOT17Det. DynaAugment also enables video models to learn more generalized representations, improving model robustness on corrupted videos. | Exploring Temporally Dynamic Data Augmentation for Video Recognition
The alignment between satellite and central galaxies serves as a proxy for addressing the issue of galaxy formation and evolution and has been investigated extensively in observations and theoretical works. Most scenarios indicate that satellites are preferentially located along the major axis of their central galaxy. Recent work shows that the strength of the alignment signal depends on the large-scale environment in observations. We use the publicly released data from EAGLE to determine whether the same effect can be found in the hydrodynamic simulation. We find a much stronger environmental dependence of the alignment signal in the simulation, and we also explore the change of the alignments to address the origin of this effect. | Alignment between Satellite and Central Galaxies in the EAGLE Simulation: Dependence on the Large-Scale Environments
Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations. | Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations |
We propose a method for reconstruction of the optical potential from scattering data. The algorithm is a two-step procedure. In the first step the real part of the potential is determined analytically via solution of the Marchenko equation. At this point we use a diagonal Pad\'{e} approximant of the corresponding unitary $S$-matrix. In the second step the imaginary part of the potential is determined via the phase equation of the variable phase approach. We assume that the real and the imaginary parts of the optical potential are proportional. We use the phase equation to calculate the proportionality coefficient. A numerical algorithm is developed for a single and for coupled partial waves. The developed procedure is applied to analysis of $^{1}S_{0}$ $NN$, $^{3}SD_{1}$ $NN$, $P31$ $\pi^{-} N$ and $S01$ $K^{+}N$ data. | Reconstruction of the optical potential from scattering data |
Chaos, namely exponential sensitivity to initial conditions, is generally considered a nuisance, inasmuch as it prevents long-term predictions in physical systems. Here, we present an easily accessible approach to undo deterministic chaos and tailor ray trajectories in arbitrary two-dimensional optical billiards, by introducing a spatially varying refractive index therein. A new refractive index landscape is obtained by a conformal mapping, which makes the trajectories of the chaotic billiard fully predictable and the billiard fully integrable. Moreover, trajectory rectification can be pushed a step further by relating chaotic billiards with non-Euclidean geometries. Two examples are illustrated by projecting billiards built on a sphere as well as the deformed spacetime outside a Schwarzschild black hole, which respectively lead to all periodic orbits and spiraling trajectories in the resulting 2D billiards/cavities. An implementation of our method is proposed, which enables real-time control of chaos and could further contribute to a wealth of potential applications in the domain of optical microcavities. | Ray engineering from chaos to order in two-dimensional optical cavities
In this article we demonstrate a way to extend the AbC (approximation by conjugation) method invented by Anosov and Katok from the smooth category to the category of real-analytic diffeomorphisms on the torus. We present a general framework for such constructions and prove several results. In particular, we construct minimal but not uniquely ergodic diffeomorphisms and nonstandard real-analytic realizations of toral translations. | Real-analytic AbC constructions on the torus |
We study the cluster, the backbone and the elastic backbone structures of the multiple invasion percolation for both the perimeter and the optimized versions. We investigate the behavior of the mass, the number of red sites (i.e., sites through which all the current passes) and the loops of those structures. Their corresponding scaling exponents are also estimated. By construction, the mass of the optimized model scales exactly with the gyration radius of the cluster; we verify that this also holds for the backbone. Our simulation shows that the red sites almost disappear, indicating that the cluster has achieved a high degree of connectivity. | Cluster, backbone and elastic backbone structures of the multiple invasion percolation
We introduce a generic visual descriptor, termed as distribution aware retinal transform (DART), that encodes the structural context using log-polar grids for event cameras. The DART descriptor is applied to four different problems, namely object classification, tracking, detection and feature matching: (1) The DART features are directly employed as local descriptors in a bag-of-features classification framework and testing is carried out on four standard event-based object datasets (N-MNIST, MNIST-DVS, CIFAR10-DVS, NCaltech-101). (2) Extending the classification system, tracking is demonstrated using two key novelties: (i) For overcoming the low-sample problem for the one-shot learning of a binary classifier, statistical bootstrapping is leveraged with online learning; (ii) To achieve tracker robustness, the scale and rotation equivariance property of the DART descriptors is exploited for the one-shot learning. (3) To solve the long-term object tracking problem, an object detector is designed using the principle of cluster majority voting. The detection scheme is then combined with the tracker to result in a high intersection-over-union score with augmented ground truth annotations on the publicly available event camera dataset. (4) Finally, the event context encoded by DART greatly simplifies the feature correspondence problem, especially for spatio-temporal slices far apart in time, which has not been explicitly tackled in the event-based vision domain. | DART: Distribution Aware Retinal Transform for Event-based Cameras |
We present the results of our observing program on near-infrared spectroscopy of high-redshift quasars, which has been undertaken both at Kitt Peak National Observatory and at Mauna Kea Observatory, University of Hawaii. These data are utilized for studying the epoch of major star formation in high-redshift quasar hosts. | The Epoch of Major Star Formation in High-z Quasar Hosts
Let $k$ be a field of arbitrary characteristic. Nakai (1978) proved a structure theorem for $k$-domains admitting a nontrivial locally finite iterative higher derivation when $k$ is algebraically closed. In this paper, we generalize Nakai's theorem to cover the case where $k$ is not algebraically closed. As a consequence, we obtain a cancellation theorem of the following form: Let $A$ and $A'$ be finitely generated $k$-domains with $A[x]\simeq _kA'[x]$. If $A$ and $\bar{k}\otimes _kA$ are UFDs and $\mathop{\rm trans.deg}\nolimits_kA=2$, then we have $A\simeq _kA'$. This generalizes the cancellation theorem of Crachiola (2009). | A generalization of Nakai's theorem on locally finite iterative higher derivations |
We illustrate how the different kinds of constraints acting on an impulsive mechanical system can be clearly described in the geometric setup given by the configuration space--time bundle $\pi_t:\mathcal{M} \to \mathbb{E}$ and its first jet extension $\pi: J_1 \to \mathcal{M}$ in a way that ensures total compliance with the axioms and invariance requirements of Classical Mechanics. We specify the differences between geometric and constitutive characterizations of a constraint. We point out the relevance of the role played by the concept of frame of reference, underlining when frame independence is mandatory and when the choice of a frame is an inescapable need. The thorough rationalization allows the introduction of unusual but meaningful kinds of constraints, such as unilateral kinetic constraints or breakable constraints, and of new theoretical aspects, such as the possible dependence of the impulsive reaction on the active forces acting on the system. | A survey about framing the bases of Impulsive Mechanics of constrained systems into a jet-bundle geometric context
Cryogenic CMOS technology (cryo-CMOS) offers a scalable solution for quantum device interface fabrication. Several previous works have studied the characterization of CMOS technology at cryogenic temperatures for various process nodes. However, CMOS characteristics for various width/length (W/L) ratios and under different bias conditions still require further research. In addition, no previous works have produced an integrated modeling process for cryo-CMOS technology. In this paper, the results of characterization of Semiconductor Manufacturing International Corporation (SMIC) 0.18 {\mu}m CMOS technology at cryogenic temperatures (varying from 300 K to 4.2 K) are presented. Measurements of thin- and thick-oxide NMOS and PMOS devices with different W/L ratios are taken under four distinct bias conditions and at different temperatures. The temperature-dependent parameters are revised and an advanced CMOS model is proposed based on BSIM3v3 at the liquid nitrogen temperature (LNT). The proposed model ensures precision at the LNT and is valid for use in an industrial tape-out process. The proposed method presents a calibration approach for BSIM3v3 that is available at different temperature intervals. | MOSFET Characterization and Modeling at Cryogenic Temperatures |
Online malware scanners are one of the best weapons in the arsenal of cybersecurity companies and researchers. A fundamental part of such systems is the sandbox that provides an instrumented and isolated environment (virtualized or emulated) for any user to upload and run unknown artifacts and identify potentially malicious behaviors. The provided API and the wealth of information in the reports produced by these services have also helped attackers test the efficacy of numerous techniques to make malware hard to detect. The most common technique used by malware for evading the analysis system is to monitor the execution environment, detect the presence of any debugging artifacts, and hide its malicious behavior if needed. This is usually achieved by looking for signals suggesting that the execution environment does not belong to the native machine, such as specific memory patterns or behavioral traits of certain CPU instructions. In this paper, we show how an attacker can evade detection on such online services by incorporating a Proof-of-Work (PoW) algorithm into a malware sample. Specifically, we leverage the asymptotic behavior of the computational cost of PoW algorithms when they run on some classes of hardware platforms to effectively detect a non-bare-metal environment of the malware sandbox analyzer. To prove the validity of this intuition, we design and implement the POW-HOW framework, a tool to automatically implement sandbox detection strategies and embed a test evasion program into an arbitrary malware sample. Our empirical evaluation shows that the proposed evasion technique is durable, hard to fingerprint, and reduces the existing malware detection rate by a factor of 10. Moreover, we show how bare-metal environments cannot scale with actual malware submission rates for consumer services. | POW-HOW: An enduring timing side-channel to evade online malware sandboxes
For standard interactions of neutrinos with matter, the bimagic baseline of length about 2540 km is known to be suitable for obtaining good discovery limits of the neutrino mass hierarchy, $\sin^2 \theta_{13}$ and CP violation in the $\nu_e \rightarrow \nu_{\mu}$ oscillation channel. We discuss how, even in the presence of non-standard interactions (NSIs) of neutrinos with matter, this baseline is found to be suitable for obtaining these discovery limits. This is because even in the presence of NSIs one could get the $\nu_e \rightarrow \nu_\mu$ oscillation probability to be almost independent of the CP-violating phase $\delta$ and $\theta_{13}$ for one hierarchy and highly dependent on these two for the other hierarchy over certain parts of the neutrino energy range. For another part of the energy range the reverse happens with respect to the hierarchies. We also present the discovery limits of NSIs in the same neutrino energy range. However, since the NSI effect in the above oscillation probability becomes relatively more pronounced in comparison to the vacuum oscillation parameters as the neutrino energy increases, we also consider a higher neutrino energy range for obtaining better discovery limits of NSIs. The analysis presented here for 2540 km could also be implemented for a longer bimagic baseline of $> 6000$ km. | Non-standard interactions and bimagic baseline for neutrino oscillations
A general expression for the cross sections of inelastic collisions of fast (including relativistic) multicharged ions with atoms, which is based on a generalization of the eikonal approximation, is derived. This expression is applicable over a wide range of collision energies, has the standard nonrelativistic limit, and in the ultrarelativistic limit coincides with Baltz's exact solution~\cite{art13} of the Dirac equation. As an application of the obtained result the following processes are calculated: the excitation and ionization cross sections of a hydrogenlike atom; the single and double excitation and ionization of a heliumlike atom; the multiple ionization of neon and argon atoms; the probability and cross section of K-vacancy production in the relativistic $U^{92+} - U^{91+}$ collision. Simple analytic formulae for the cross sections of inelastic collisions and recurrence relations between the ionization cross sections of different multiplicities are also obtained. Comparisons of our results with the experimental data and the results of other calculations are given. | Inelastic Processes in the Collision of Relativistic Highly Charged Ions with Atoms
The turbulent flow of a fluid carrying trace amounts of a condensable species through a differentially cooled vertical channel geometry is simulated using single-phase direct numerical simulations. The release of latent heat during condensation is modeled by interdependent temperature and vapor concentration source terms governing the relation between the removal of excess vapor from the system and the associated local increase in fluid temperature. A coupling between condensation and turbulence is implemented via solutal and thermal buoyancy. When compared to simulations of an identical system without phase transition modeling, the modifications of the subcooled boundary layer due to the transient and highly localized release of latent heat could be observed. A separate analysis of fluid before and after phase transition events shows a clear increase in post-interaction streak spacing, with the release of latent heat during condensation events opposing the cooling effect of the channel wall and the associated damping of turbulence. | Condensation-induced flow structure modifications in turbulent channel flow investigated in direct numerical simulations |
We study the double ionization of atoms subjected to circularly polarized (CP) laser pulses. We analyze two fundamental ionization processes: the sequential (SDI) and non-sequential (NSDI) double ionization in the light of the rotating frame (RF) which naturally embeds nonadiabatic effects in CP pulses. We use and compare two adiabatic approximations: The adiabatic approximation in the laboratory frame (LF) and the adiabatic approximation in the RF. The adiabatic approximation in the RF encapsulates the energy variations of the electrons on subcycle timescales happening in the LF and this, by fully taking into account the ion-electron interaction. This allows us to identify two nonadiabatic effects including the lowering of the threshold intensity at which over-the-barrier ionization happens and the lowering of the ionization time of the electrons. As a consequence, these nonadiabatic effects facilitate over-the-barrier ionization and recollision-induced ionizations. We analyze the outcomes of these nonadiabatic effects on the recollision mechanism. We show that the laser envelope plays an instrumental role in a recollision channel in CP pulses at the heart of NSDI. | Nonadiabatic effects in the double ionization of atoms driven by a circularly polarized laser pulse |
Using nonperturbative results obtained recently for a uniformly accelerated Unruh-DeWitt detector, we discover new features in the dynamical evolution of the detector's internal degree of freedom, and identify the Unruh effect derived originally from time-dependent perturbation theory as operative in the ultra-weak coupling and ultra-high acceleration limits. The mutual interaction between the detector and the field engenders entanglement between them, and tracing out the field leads to a mixed state of the detector even for a detector at rest in the Minkowski vacuum. Our findings based on this exact solution clearly show the differences from the ordinary result, where the quantum field's backreaction is ignored, in that the detector no longer behaves like a perfect thermometer. From a calculation of the evolution of the reduced density matrix of the detector, we find that the transition probability from the initial ground state over an infinitely long duration of interaction derived from time-dependent perturbation theory exists in the exact solution only transiently, under special limiting conditions corresponding to the Markovian regime. Furthermore, the detector at late times never sees an exact Boltzmann distribution over the energy eigenstates of the free detector; thus, in the non-Markovian regime covering a wider range of parameters, the Unruh temperature cannot be identified inside the detector. | Backreaction and Unruh effect: New insights from exact solutions of uniformly accelerated detectors
Drones are attracting increasing attention in a variety of research fields because of their flexibility and are expected to be applied to a wide range of potential applications, among which super-high-resolution video surveillance using drones especially attracts the authors' research attention. Surveillance systems using cameras with fixed locations always suffer from blind spots due to blockage or inappropriate deployment. Instead, by using drones equipped with cameras, the surveillance performance can be drastically improved due to their high mobility. The video quality is also a key factor in the surveillance performance. In face recognition, one of the most important surveillance applications, uncompressed video can greatly improve the detection accuracy, but it is difficult to transmit uncompressed video in real time due to the huge data sizes. To address this issue, we propose to use ultra-high-speed mmWave communication for video transmission from drones. Moreover, due to the limited battery energy and computing power of drones, we introduce edge computing and propose to offload all the computation from the drones to the ground station. In addition, a proof-of-concept hardware prototype of the proposed uncompressed 4K video transmission system from drones through mmWave is developed, and the experimental results are consistent with the system design expectations. | Proof-of-Concept of Uncompressed 4K Video Transmission from Drone through mmWave
We report on our project to find explicit examples of $K3$ surfaces having real or complex multiplication. Our strategy is to search through the arithmetic consequences of RM and CM. In order to do this, an efficient method is needed for point counting on surfaces defined over finite fields. For this, we describe algorithms that are $p$-adic in nature. | Point counting on $K3$ surfaces and an application concerning real and complex multiplication |
The 10-electron generalized relativistic effective core potential and the corresponding correlation spin-orbital basis sets are generated for the Ra atom and the relativistic coupled cluster calculations for the RaO molecule are performed. The main goal of the study is to evaluate the P,T-odd parameter X characterized by the molecular electronic structure and corresponding to a "volume effect" in the interaction of the ^{225}Ra nucleus Schiff moment with the electronic shells of RaO. Our final result for X(^{225}RaO) is -7532, which is surprisingly close to that in ^{205}TlF but has the opposite sign. The obtained results are discussed and the quality of the calculations is analyzed. The value is of interest for a proposed experiment on RaO [PRA 77, 024501 (2008)] due to a very large expected Schiff moment of the ^{225}Ra nucleus. | Calculation of the parity and time reversal violating interaction in ^{225}RaO
Pressure-volume-temperature data, along with dielectric relaxation measurements, are reported for a series of polychlorinated biphenyls (PCB), differing in the number of chlorine atoms on their phenyl rings. Analysis of the results reveals that with increasing chlorine content, the relaxation times of the PCB become governed to a greater degree by density, rho, relative to the effect of temperature, T. This result is consistent with the respective magnitudes of the scaling exponent, gamma, yielding superpositioning of the relaxation times measured at various temperatures and pressures, when plotted versus rho^gamma/T. While at constant (atmospheric) pressure, fragilities for the various PCB are equivalent, the fragility at constant volume varies inversely with chlorine content. Evidently, the presence of bulkier chlorine atoms on the phenyl rings magnifies the effect density has on the relaxation dynamics. | Effect of Chemical Structure on the Isobaric and Isochoric Fragility in Polychlorinated Biphenyls |
Runway incursions are among the most serious safety concerns in air traffic control. Traditional A-SMGCS level 2 safety systems detect runway incursions with the help of surveillance information only. In the context of SESAR, complementary safety systems are emerging that also use other information in addition to surveillance, and that aim at warning about potential runway incursions at earlier points in time. One such system is "conflicting ATC clearances", which processes the clearances entered by the air traffic controller into an electronic flight strips system and cross-checks them for potentially dangerous inconsistencies. The cross-checking logic may be implemented directly based on the clearances and on surveillance data, but this is cumbersome. We present an approach that instead uses ground routes as an intermediate layer, thereby simplifying the core safety logic. | Route-Based Detection of Conflicting ATC Clearances on Airports |
Extending deterministic compartmental pharmacokinetic models to diffusions does not seem realistic on the biological side because the paths of these stochastic processes are not smooth enough. In order to extend one-compartment intravenous bolus models, this paper suggests modeling the concentration process $C$ by a class of stochastic differential equations driven by a fractional Brownian motion with Hurst parameter belonging to $]1/2,1[$. The first part of the paper provides probabilistic and statistical results on the concentration process $C$: the distribution of $C$, a control of the uniform distance between $C$ and the solution of the associated ordinary differential equation, an ergodic theorem for the concentration process and its application to the estimation of the elimination constant, and consistent estimators of the driving signal's Hurst parameter and of the volatility constant. The second part of the paper provides applications of these theoretical results to simulated concentration data: a qualitative procedure for choosing parameters on small sets of observations, and simulations of the estimators of the elimination constant and of the driving signal's Hurst parameter. The relationship between the estimation quality and the size/length of the sample is discussed. | A Pathwise Fractional one Compartment Intra-Veinous Bolus Model
Ionised gas kinematics provide crucial evidence of the impact that active galactic nuclei (AGN) have in regulating star formation in their host galaxies. Although the presence of outflows in AGN host galaxies has been firmly established, the calculation of outflow properties such as mass outflow rates and kinetic energy remains challenging. We present the [OIII]5007 ionised gas outflow properties of 22 z$<$0.1 X-ray AGN, derived from the BAT AGN Spectroscopic Survey using MUSE/VLT. With an average spatial resolution of 1" (0.1-1.2 kpc), the observations resolve the ionised gas clouds down to sub-kiloparsec scales. Resolved maps show that the [OIII] velocity dispersion is, on average, higher in regions ionised by the AGN, compared to star formation. We calculate the instantaneous outflow rates in individual MUSE spaxels by constructing resolved mass outflow rate maps, incorporating variable outflow density and velocity. We compare the instantaneous values with time-averaged outflow rates by placing mock fibres and slits on the MUSE field-of-view, a method often used in the literature. The instantaneous outflow rates (0.2-275 $M_{\odot}$ yr$^{-1}$) tend to be 2 orders of magnitude higher than the time-averaged outflow rates (0.001-40 $M_{\odot}$ yr$^{-1}$). The outflow rates correlate with the AGN bolometric luminosity ($L_{\rm bol}\sim$ 10$^{42.71}$-10$^{45.62}$ erg/s) but we find no correlations with black hole mass (10$^{6.1}$-10$^{8.9}$ M$_{\odot}$), Eddington ratio (0.002-1.1) and radio luminosity (10$^{21}$-10$^{26}$ W/Hz). We find the median coupling between the kinetic energy and $L_{\rm bol}$ to be 1%, consistent with the theoretical predictions for an AGN-driven outflow. | BASS XXXI: Outflow scaling relations in low redshift X-ray AGN host galaxies with MUSE |
We investigate theoretically and numerically the quantum reflection of dark solitons propagating through an external reflectionless potential barrier or in the presence of a position-dependent dispersion. We confirm that quantum reflection occurs in both cases, with a sharp transition between complete reflection and complete transmission at a critical initial soliton speed. The critical speed is calculated numerically and analytically in terms of the soliton and potential parameters. Analytical expressions for the critical speed are derived using the exact trapped mode, as well as time-independent and time-dependent variational calculations. It is then shown that resonant scattering occurs at the critical speed, where the energy of the incoming soliton is resonant with that of a trapped mode. Reasonable agreement between analytical and numerical values for the critical speed is obtained as long as a periodic multi-soliton ejection regime is avoided. | Quantum reflection of dark solitons scattered by reflectionless potential barrier and position-dependent dispersion
In this memoir, we study the even unimodular lattices of rank at most 24, as well as a related collection of automorphic forms of the orthogonal, symplectic and linear groups of small rank. Our guide is the question of determining the number of p-neighborhoods, in the sense of M. Kneser, between two isometry classes of such lattices. We prove a formula for this number, in which occur certain Siegel modular forms of genus 1 and 2. It has several applications, such as the proof of a conjecture of G. Nebe and B. Venkov about the linear span of the higher genus theta series of the Niemeier lattices, the computation of the p-neighborhoods graphs of the Niemeier lattices (the case p = 2 being due to Borcherds), or the proof of a congruence conjectured by G. Harder. Classical arguments reduce the problem to the description of the automorphic representations of a suitable integral form of the Euclidean orthogonal group of R^24 which are unramified at each finite prime and trivial at the archimedean prime. The recent results of J. Arthur suggest several new approaches to this type of questions. This is the other main theme that we develop in this memoir. We give a number of other applications, for instance to the classification of Siegel modular cuspforms of weight at most 12 for the full Siegel modular group. | Formes automorphes et voisins de Kneser des r\'eseaux de Niemeier |
The problem of testing low-degree polynomials has received significant attention over the years due to its importance in theoretical computer science, and in particular in complexity theory. The problem is specified by three parameters: field size $q$, degree $d$ and proximity parameter $\delta$, and the goal is to design a tester making as few queries as possible to a given function, which is able to distinguish between the case that the given function has degree at most $d$, and the case that the given function is $\delta$-far from any degree $d$ function. A tester is called optimal if it makes $O(q^d+1/\delta)$ queries (which are known to be necessary). For the field of size $q$, the natural $t$-flat tester was shown to be optimal first by Bhattacharyya et al. for $q=2$, and later by Haramaty et al. for all prime powers $q$. The dependency on the field size, however, is a tower-type function. We improve the results above, showing that the dependency on the field size is polynomial. Our approach also applies in the more general setting of lifted affine invariant codes, and is based on studying the structure of the collection of erroneous subspaces, i.e., subspaces $A$ such that $f|_{A}$ has degree greater than $d$. Towards this end, we observe that these sets are poorly expanding in the affine version of the Grassmann graph and use that to establish structural results on them via global hypercontractivity. We then use this structure to perform local correction on $f$. | Improved Optimal Testing Results from Global Hypercontractivity
Reaction-diffusion equations are one of the most common mathematical models in the natural sciences and are used to model systems that combine reactions with diffusive motion. However, rather than normal diffusion, anomalous subdiffusion is observed in many systems and is especially prevalent in cell biology. What are the reaction-subdiffusion equations describing a system that involves first-order reactions and subdiffusive motion? In this paper, we answer this question. We derive fractional reaction-subdiffusion equations describing an arbitrary number of molecular species which react at first-order rates and move subdiffusively with general space-dependent diffusivities and drifts. Importantly, different species may have different diffusivities and drifts, which contrasts previous approaches to this question which assume that each species has the same movement dynamics. We derive the equations by combining results on time-dependent fractional Fokker-Planck equations with methods of analyzing stochastically switching evolution equations. Furthermore, we construct the stochastic description of individual molecules whose deterministic concentrations follow these reaction-subdiffusion equations. This stochastic description involves subordinating a diffusion process whose dynamics are controlled by a subordinated Markov jump process. We illustrate our results in several examples and show that solutions of the reaction-subdiffusion equations agree with stochastic simulations of individual molecules. | Reaction-subdiffusion equations with species-dependent movement |
In the present note we obtain new results on two conjectures by Csordas et al. regarding the interlacing property of zeros of special polynomials. These polynomials came from the Jacobi tau methods for the Sturm-Liouville eigenvalue problem. Their coefficients are the successive even derivatives of the Jacobi polynomials $P_n(x;\alpha,\beta)$ evaluated at the point one. The first conjecture states that the polynomials constructed from $P_n(x;\alpha,\beta)$ and $P_{n-1}(x;\alpha,\beta)$ are interlacing when $-1<\alpha<1$ and $-1<\beta$. We prove it in a range of parameters wider than that given earlier by Charalambides and Waleffe. We also show that within narrower bounds another conjecture holds. It asserts that the polynomials constructed from $P_n(x;\alpha,\beta)$ and $P_{n-2}(x;\alpha,\beta)$ are also interlacing. | On conjectures by Csordas, Charalambides and Waleffe |
The popularity of business intelligence (BI) systems to support business analytics has tremendously increased in the last decade. The determination of data items that should be stored in the BI system is vital to ensure the success of an organisation's business analytic strategy. Expanding conventional BI systems often leads to high costs of internally generating, cleansing and maintaining new data items, whilst the additional data storage costs are in many cases of minor concern -- a conceptual difference from big data systems. Thus, potential additional insights resulting from a new data item in the BI system need to be balanced with the often high costs of data creation. While the literature acknowledges this decision problem, no model-based approach to inform this decision has hitherto been proposed. The present research describes a prescriptive framework to prioritise data items for business analytics and applies it to human resources. To achieve this goal, the proposed framework captures core business activities in a comprehensive process map and assesses their relative importance and possible data support with multi-criteria decision analysis. | Prioritising data items for business analytics: Framework and application to human resources
We consider the following scheduling problem. There is a single machine and the jobs will arrive for completion online. Each job j is preemptive and, upon its arrival, its other characteristics are immediately revealed to the machine: the deadline requirement, the workload and the value. The objective is to maximize the aggregate value of jobs completed by their deadlines. Using the minimum of the ratios of deadline minus arrival time to workload over all jobs as the slackness s, a non-committed and a committed online scheduling algorithm is proposed in [Lucier et al., SPAA'13; Azar et al., EC'15], achieving competitive ratios of 2+f(s), where the big O notation f(s)=\mathcal{O}(\frac{1}{(\sqrt[3]{s}-1)^{2}}), and (2+f(s*b))/b respectively, where b=\omega*(1-\omega), \omega is in (0, 1), and s is no less than 1/b. In this paper, without recourse to the dual fitting technique used in the above works, we propose a simpler and more intuitive analytical framework for the two algorithms, improving the competitive ratio of the first algorithm by 1 and therefore improving the competitive ratio of the second algorithm by 1/b. As stated in [Lucier et al., SPAA'13; Azar et al. EC'15], it is justifiable in scenarios like the online batch processing for cloud computing that the slackness s is large, hence the big O notation in the above competitive ratios can be ignored. Under the assumption, our analysis brings very significant improvements to the competitive ratios of the two algorithms: from 2 to 1 and from 2/b to 1/b respectively. | Improved Competitive Analysis of Online Scheduling Deadline-Sensitive Jobs |
We study the formation dynamics of a spontaneous ferromagnetic order in single self-assembled CdMnTe quantum dots. By measuring time-resolved photoluminescence, we determine the formation times for QDs with Mn ion contents x varying from 0.01 to 0.2. At low x these times are orders of magnitude longer than exciton spin relaxation times evaluated from the decay of photoluminescence circular polarization. This allows us to conclude that the direction of the spontaneous magnetization is determined by a momentary Mn spin fluctuation rather than resulting from an optical orientation. At higher x, the formation times are of the same order of magnitude as found in previous studies on higher dimensional systems. We also find that the exciton spin relaxation accelerates with increasing Mn concentration. | Magnetic polaron formation and exciton spin relaxation in single CdMnTe quantum dots |